WOS

Hi everyone, I’m Giordano.
Wind Of Softwing is a project I created in December 2024. My goal is to use AI before AI uses me.
In my social posts I explain this concept from my point of view: AI applied to the creation of software, rather than a future landscape in which AI is integrated into the software itself.
Welcome to my (for now) simple blog.



POSTS


7 - Will we adapt in time?

Considering our current baseline knowledge and what society requires from us, when can a person be defined as “digitally illiterate”?
When should we worry about all those people who will depend on others for technology-related tasks?

At the time of writing, digital illiteracy is still very widespread.
Many people cannot perform the actions that are now the “minimum required” to interact properly with institutions, services, and other people: from managing a simple email account to understanding basic web security.
Unfortunately, these people (often including young people) will end up depending on someone else even for simple actions. And those actions will keep changing over time, requiring more and more familiarity with technology.

Here we reach a crucial point:

How fast is technological development compared to people’s ability to adapt?
How can we avoid being overwhelmed by a wave of digitally illiterate people?

For example, being able to use AI gives you a major advantage in the global job market. Today, roughly 60% of the population can use AI in a basic way (chatbots), already surpassing many others in skill and efficiency.
This knowledge is already significant today even though it’s not yet part of the minimum required skills; in a few years it likely will be.

Will people be able to adapt and learn the “new minimum required” fast enough to match the pace of development?

If you don’t feel able to keep up, you don’t need to be afraid: just read a little every day and you’ll start absorbing the basics of technology. Don’t get caught unprepared, or you’ll be surpassed by those who can make good use of information that is free and accessible to everyone.

Don’t be afraid of technology: learn it, evaluate it, and use it before you end up depending on it without understanding it.

If you’re interested in these topics, follow us on social media.

6 - Why do we keep evolving?

Where does the need for technological innovation come from?

The need to evolve originally came from physiological needs: making survival easier and reducing the risks we were exposed to.
Over time, those needs decreased thanks to technology and comfort, until we could satisfy most necessities and many conveniences.

So why do we keep evolving?

Today humans are driven to create objects and concepts that are not strictly necessary for survival, but that increase comfort, distraction, and the optimization of what already exists. This path started after the era when we had to hunt to survive.
I would personally rename this period as “the phase of changing evolutionary goals.”

Even the first inventions aimed at survival gave us more calm and safety than before, which pushed us to gradually find ways to occupy the time we had gained, or to optimize what had already been invented.

Over the centuries, humans found more and more free time, filling it with activities not directly tied to basic survival, but to optimizing what exists (including medicine) and to “mind satisfaction” activities: games, social media, gadgets, the concept of “luxury”, and many others, often created to fill the psychological gaps produced by free time.

If you’re interested in these topics, follow us on social media.

5 - Will critical AI mistakes save us from replacement?

Today humans still cover many job roles that in the future could be performed by AI.

This is already happening—but how much time do we have before we start leaving the stage?

AI still has a significant error margin, so for now it cannot replace humans in many tasks. But a day may come when AI’s error rate is lower than the human error rate, and, combined with its broad applicability, this will lead to replacement.

But will it really be that simple?

In probability terms, one might think it’s enough for AI to “make fewer mistakes than humans.” But that’s not the full story.
The real factor is: within that percentage of errors (for both AI and humans), who statistically makes more *critical* mistakes?

Exactly: the *severity* of errors can keep humans useful longer than expected.
Even if AI makes fewer errors overall, as long as its mistakes carry a higher risk of being critical in a specific domain, humans won’t be fully replaceable.

Only when the “criticality rate” becomes lower for AI will we reach the moment of progressive replacement.
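The argument above can be made concrete with a little arithmetic: what matters is not the overall error rate but the rate of *critical* errors per task, i.e. the error rate multiplied by the share of errors that turn out to be critical. A minimal sketch, with purely hypothetical numbers invented for illustration (they are not measurements of any real system):

```python
# Hypothetical numbers, purely illustrative: they are not measurements.
# The point: compare who produces more *critical* errors per task,
# not just who produces more errors.

def critical_error_rate(error_rate: float, critical_share: float) -> float:
    """Probability that a given task ends in a critical error:
    overall error rate times the share of errors that are critical."""
    return error_rate * critical_share

# Suppose an AI errs on 2% of tasks but 30% of its errors are critical,
# while a human errs on 5% of tasks but only 5% of those are critical.
ai = critical_error_rate(0.02, 0.30)      # 0.6% of tasks end critically
human = critical_error_rate(0.05, 0.05)   # 0.25% of tasks end critically

# The AI makes fewer errors overall (2% < 5%), yet more critical ones:
print(ai > human)  # True: by this metric, the human is still "safer"
```

Under these assumed numbers, replacement would wait until the AI’s criticality rate, not just its error rate, drops below the human one.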

If you’re interested in these topics, follow us on social media.

4 - Are we risking the end of human society?

How dependent are we on the internet?
On cloud systems?
On instant communications?
Have we ever really asked ourselves?

And what if one day all of this were gone?

If for any reason all of this disappeared, would we have a Plan B to keep this heavily connected society running?

I think we don’t.

Back in 1998, people talked about the Millennium Bug as something that could end society, and since then our dependence on technology has only increased.
So why add AI to the list of things we depend on?
Something that essentially “thinks for us”? This goes beyond mere global connectivity or an online database.

In a not-so-distant future, if we keep evolving at this pace, integrating AI everywhere it can be integrated may weaken our own neural networks, our intellectual identity, replacing them with a growing need for something that thinks for us even in simple actions.

How can we manage this responsibly?
By using AI as support, not as a substitute, and by building a model of society that relies on the internet but has a “second parachute layer”: a hybrid approach between stand-alone systems and cloud systems, using AI to create and optimize that layer, but without making the layer itself dependent on AI.
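The “second parachute layer” is essentially a fallback pattern: prefer the connected service, but keep a stand-alone path that works when the cloud does not. A minimal sketch of that idea; all names and the exchange-rate scenario are hypothetical, invented for illustration only:

```python
# A minimal sketch of the "second parachute layer".
# LOCAL_RATES stands in for any periodically synced offline copy of data
# the system normally fetches from the cloud.

LOCAL_RATES = {("EUR", "USD"): 1.08}  # stand-alone fallback data

def cloud_rate(pair):
    """Stand-in for a cloud call; here it always fails, simulating an outage."""
    raise ConnectionError("cloud unreachable")

def exchange_rate(pair):
    """Prefer the live cloud value, but survive on the stand-alone copy."""
    try:
        return cloud_rate(pair)
    except ConnectionError:
        return LOCAL_RATES[pair]  # parachute: degraded but still functional

print(exchange_rate(("EUR", "USD")))  # 1.08 even with the "cloud" down
```

The design choice matches the post: AI could help write and tune such a layer offline, but the layer itself runs without AI or connectivity.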

If you’re interested in these topics, follow us on social media.

3 - Algorithms or integrated AI?

What will the future of AI integration into existing systems look like?
Will it be a complete replacement of stand-alone algorithms, or support for them?

Stand-alone algorithms are designed for specific tasks. Their output depends on well-defined code, which can be improved until it performs its job very reliably, at the cost of a limited action range (it does only what’s coded).
But AI?

AI is currently not fully reliable due to error margins. But unlike algorithms, it can cover a much broader range of topics thanks to learning, including adjacent areas.

So:

- Algorithms: maximum reliability and stability, but harder to implement complex features;

- AI: less reliable/stable, but easier to implement features and can cover adjacent topics learned through training.



Questions arise:
Will there be a moment when AI can provide the same reliability and certainty in outputs/data processing?
Until then, how should we use AI?

In my opinion, we should not aim for a full replacement of algorithms, but for integration into them. Or better: use AI to produce algorithms dedicated to a precise task, without embedding AI inside the final software.
This gives us the ease of building features with AI while keeping the reliability of the resulting algorithm.
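To make the proposal concrete: the idea is to let an AI assistant *write* a deterministic algorithm once, then ship only the algorithm, instead of calling an AI model at runtime. A minimal sketch; the simplified IBAN-length check and its partial country table are hypothetical examples, not a complete validator:

```python
# Sketch of the approach the post favors: an AI assistant helps write a
# deterministic algorithm during development; the shipped software
# contains only the algorithm, not the AI.
# This IBAN-length check is a simplified, hypothetical example.

IBAN_LENGTHS = {"IT": 27, "DE": 22, "FR": 27}  # partial table, illustration only

def plausible_iban(iban: str) -> bool:
    """Deterministic check: the same input always yields the same output,
    it works offline, and it can be unit-tested exhaustively."""
    iban = iban.replace(" ", "").upper()
    country = iban[:2]
    return country in IBAN_LENGTHS and len(iban) == IBAN_LENGTHS[country]

# The alternative the post warns about would be querying an AI model at
# runtime for each check: broader coverage, but non-deterministic output,
# a network dependency, and an error margin inside the shipped software.
print(plausible_iban("IT60 X054 2811 1010 0000 0123 456"))  # True
```

The trade-off mirrors the list above: the generated algorithm keeps the reliability of stand-alone code, while the AI’s flexibility is spent only at development time.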

What do you think?

Are you in favor of algorithms created with AI assistance?
Or full systemic AI integration in common software?

Let me know on social media.

2 - Think with AI, don’t let only AI think

The support AI gives us, especially through public models, could atrophy the way we develop solutions to problems in life or at work, while instead improving the part of the brain that focuses on asking the right questions (prompting) to obtain solutions.
A near future may come where people rely more on crafting good prompts than on solving problems with their own logic.
In my opinion, AI dependence will start from this factor.

It’s like when calculators were invented: before, people used pen and paper; now calculators are everywhere (offices, stores, supermarkets, etc.).
AI will be similar, but amplified, covering far more areas of logic.

How to avoid this?
Even just being aware of it helps: use AI as support, not as a substitute for your brain.
Another good habit is to make sure you understand the answers and solutions you receive (in chat), so you can see how each solution was derived.

The quickest way?
Many AI chats show the reasoning used before the final answer. Reading that reasoning is valuable cognitively and helps you spot where the AI goes wrong.

More posts will follow with my ideas on AI and how to reduce its negative effects on our psyche.

If you’re interested, follow us on social media.

1 - Mental consequences of integrated AI vs non-integrated

Will the use of AI in our society gradually lead to a kind of cognitive handicap, even for basic reasoning?
If so, will it be a direct consequence of the support AI offers, replacing our work?
Or will it lead to dependence without fully replacing our actions?

Here’s an even more important question:
In the future, will we be able to become self-sufficient again if this support suddenly disappears?

I believe these are the most important questions we should ask ourselves for the near future.

In the next posts I’ll share my ideas to safeguard human integrity and how (in my view) we should manage AI usage.

If you’re interested, follow us on social media.




About