Hello everyone, I’m Giordano.
Wind Of Softwing is a project I conceived in December 2024. My goal is to use AI before AI uses me.
In my social posts I will explain this idea from my point of view, looking at AI and its application to creating software, rather than its implementation inside software, as the landscape of the future.
Welcome to my (for now) blog.
POSTS
7 - Will we be able to adapt in time?
When, given the basic knowledge we have today and what society asks of us, can someone be defined as “digitally illiterate”?
When should we start worrying about all the people who will depend on others for technology?
Today, at the time I’m writing, digital illiteracy is still widespread.
Many people cannot perform the actions that are now the “minimum required” to interact properly with institutions and other people: from managing a simple email account to understanding basic web security.
Unfortunately, these people—often even young ones—who are not tech‑savvy for various reasons, will end up depending on someone else even for the simplest actions. And those actions will change over time, requiring more and more skills and affinity with technology.
Here we reach a crucial point:
How fast is technological development compared to people’s ability to adapt?
How can we avoid being overwhelmed by a wave of digital illiteracy?
For example, those who can use AI are greatly advantaged in the global job market. Currently, roughly 60% of the population can use AI at a basic level (chatbots), which already puts them ahead of many others in skills and efficiency.
This knowledge is already significant today, even if it is not yet part of the minimum skills required by society. But in a few years it probably will be.
Will people be able to adapt and learn the “new minimum required” fast enough to keep up with the pace of development?
If some of you reading this don’t feel able to keep up, there is no need to be afraid: just read a little each day to start absorbing the basics of technology. Don’t get caught unprepared, or you will be overtaken by those who make good use of information that is free and available to everyone.
Don’t be afraid of technology: learn it, evaluate it and use it before you end up depending on it without realizing it.
If you are interested in these topics, follow us on social media.
6 - Why do we keep evolving?
Where does the need for technological innovation come from?
The need to evolve originally comes from our body’s physiological demands: a need for easier survival, reducing the risks we were exposed to.
Over time, those physiological demands decreased thanks to technology and comfort, until many of the needs we once had were finally satisfied.
So why do we keep evolving?
Today humans are driven by the desire to create objects or concepts that are not necessary for survival, but that increase comfort and distraction and optimize what already exists. This path was set in motion long ago, once we no longer had to hunt to survive.
I would personally rename that period as the “phase of changing the evolutionary goal”.
Because, starting from the earliest inventions meant to ease survival, life became calmer than before, and this slowly pushed humans to imagine ways to fill the time they had gained, or to optimize the inventions they had already created.
Over the centuries, humans found more and more free time, filling it not with activities that directly serve basic survival, but with the optimization of what already exists (including medical care, which is very important) and with activities that satisfy the mind: games, social media, gadgets, the concept of “luxury”, and many other concepts invented to fill the psychological voids created by free time.
If you are interested in these topics, follow us on social media.
5 - Will critical AI errors save us from replacement?
Today humans still fill many jobs, jobs that in the future could be covered by AI.
This is already happening, but how much time do we have left before we exit the scene?
AI still has a high error margin and therefore, at the moment, cannot replace humans in many tasks. But a day will come when AI’s error margin is lower than the probability of human error, and that, together with its breadth of application, will lead to human replacement.
But will it really?
From a probabilistic point of view, people expect that it’s enough to wait until AI makes fewer mistakes than humans, but in reality it’s not that simple.
The real factor is: within that percentage of errors—AI and human alike—who statistically makes more critical errors?
Exactly: the criticality of errors will keep humans useful longer than expected.
As long as AI makes fewer mistakes than humans, but there is a higher risk of critical errors in its field of application, humans are not replaceable.
Only when AI’s share of critical errors drops below ours will the time come for our progressive exit.
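To make this concrete, here is a minimal sketch in Python with purely invented numbers: it weights each error rate by the cost of its errors, so a system that makes fewer mistakes overall can still cause more expected damage if a larger share of those mistakes is critical.

```python
# Minimal sketch with invented, illustrative numbers only: comparing raw error
# rates versus error rates weighted by how critical the errors are.

def expected_damage(error_rate: float, critical_share: float,
                    critical_cost: float, minor_cost: float) -> float:
    """Expected damage per task: critical errors weighted by their cost,
    plus minor errors weighted by theirs."""
    critical_rate = error_rate * critical_share
    minor_rate = error_rate * (1 - critical_share)
    return critical_rate * critical_cost + minor_rate * minor_cost

# Hypothetical figures: the AI makes fewer errors overall,
# but a larger share of its errors is critical.
human = expected_damage(error_rate=0.05, critical_share=0.02,
                        critical_cost=100.0, minor_cost=1.0)
ai = expected_damage(error_rate=0.02, critical_share=0.20,
                     critical_cost=100.0, minor_cost=1.0)

print(f"human expected damage per task: {human:.3f}")  # 0.149
print(f"AI expected damage per task:    {ai:.3f}")     # 0.416
# Despite the lower raw error rate, the AI causes more expected damage here:
# criticality, not just frequency, decides when replacement makes sense.
```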
If you are interested in these topics, follow us on social media.
4 - Are we risking the end of human society?
How dependent are we on the internet?
On cloud systems?
On instant communications?
Have we ever really asked ourselves?
And what if one day all of this were to disappear?
If for any reason all of this were to disappear, would we have a Plan B to keep this highly connected society working?
I think not.
As far back as 1998, people were already talking about the Millennium Bug as a problem that would end human society, and since then our dependence on technology has only grown.
So why should we also add AI to the list of things we depend on? Something that basically thinks for us? It seems there’s more at stake than a global connection or an online database.
In a not too distant future, if we keep up these rates of technological evolution, integrating AI into everything it can be integrated into will probably lead to the loss of our own neural networks, our intellectual identity, making room for an enormous need for something that thinks for us, even for the simplest actions.
How can we manage all this responsibly and carefully? By using AI as a support and not as a substitute, and by creating a social model that relies on the network but has a “second safety layer”:
a hybrid management of stand‑alone systems and cloud usage, using AI to create and optimize this layer, but without making the layer itself depend on AI.
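What could that second layer look like in practice? Here is a rough sketch in Python, with hypothetical names and data, of a service that prefers the cloud but falls back to a plain, stand‑alone local path when the cloud is unreachable.

```python
# Rough sketch of a "second safety layer": prefer the cloud service, but keep
# a local, stand-alone fallback that needs no network and no AI.
# All names and data here are hypothetical, for illustration only.

def lookup_in_cloud(record_id: str) -> dict:
    """Placeholder for a call to a cloud-hosted (possibly AI-backed) service."""
    raise ConnectionError("cloud unreachable")  # simulate an outage

def lookup_in_local_store(record_id: str) -> dict:
    """Stand-alone path: a plain local store, no network, no AI."""
    local_store = {"42": {"name": "example record", "source": "local"}}
    return local_store.get(record_id, {"name": "unknown", "source": "local"})

def lookup(record_id: str) -> dict:
    try:
        return lookup_in_cloud(record_id)
    except (ConnectionError, TimeoutError):
        # Second safety layer: the system keeps working, just with less comfort.
        return lookup_in_local_store(record_id)

print(lookup("42"))  # falls back to the local copy when the cloud is down
```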
If you are interested in these topics, follow us on social media.
3 - Algorithms or integrated AI?
How will the future of AI integration into existing systems look?
Will it be a complete replacement of stand‑alone algorithms, or support for them?
Stand‑alone algorithms are created to perform very specific tasks, and the result depends on well-defined code that can be improved until it performs its function impeccably—at the cost of a limited range, because it only does what is implemented in the code.
But AI?
AI currently isn’t fully reliable due to remaining error margins, but unlike algorithms it spans a much wider range of topics thanks to learning, and it can understand adjacent topics around its specialization.
So:
- Algorithms give maximum reliability and stability for their task, but new features are harder to implement;
- AI is less reliable and less stable for its task, but features are easier to implement and it can cover adjacent topics acquired through learning.
This raises questions:
Will there come a time when AI can provide the same reliability and certainty about the correctness of its output or data processing?
Until then, how should we use AI?
In my opinion, we should not aim for a complete replacement of algorithms as the final goal; instead, we should integrate AI into them—or better yet, use AI to produce algorithms dedicated to a specific task, but without implementing AI inside those algorithms.
The result is to leverage AI’s ease in implementing tasks while keeping the algorithm’s reliability.
In conclusion, AI should (in my view) be limited to creating algorithms, giving us a certainty in the responses that AI itself may never fully provide.
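To illustrate the difference, here is a small sketch (the task and the names are arbitrary examples, not part of any real project): a deterministic algorithm that could have been written with AI assistance but contains no AI at runtime, next to the runtime-AI approach I argue against for this kind of task.

```python
# Arbitrary example task: validating a five-digit postal code.
import re

# Approach suggested above: the algorithm may have been *written* with AI
# assistance, but at runtime it is plain, deterministic code.
POSTAL_CODE = re.compile(r"^\d{5}$")

def is_valid_postal_code(value: str) -> bool:
    return bool(POSTAL_CODE.match(value))

# Approach I argue against for this kind of task: calling a model at runtime
# (pseudocode only, no real API shown) and trusting its answer.
def is_valid_postal_code_via_ai(value: str) -> bool:
    # answer = some_model.ask(f"Is '{value}' a valid five-digit postal code?")
    # return answer.strip().lower() == "yes"
    raise NotImplementedError("runtime AI call: slower, costly, never 100% certain")

print(is_valid_postal_code("00185"))  # True
print(is_valid_postal_code("ABCDE"))  # False
```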
What do you think?
Are you for using algorithms created with AI help?
Or for systemic integration of AI into everyday software (or neither)?
The support AI gives us, especially through public models, could lead to an atrophy in how we develop solutions to problems in life or at work, while strengthening the part of the brain devoted to crafting good prompts to ask AI for solutions.
A near future will probably come where people’s reasoning is more oriented toward writing the right prompts than toward finding solutions on their own. And in my opinion, AI dependence will begin right there.
It is like when the calculator was invented: people used to do calculations with pen and paper, while now calculators are everywhere (offices, shops, supermarket registers, and so on).
AI will do the same, but amplified and extended to other fields of reasoning.
How do we avoid this?
Even just awareness and acceptance of this concept helps when using AI: it keeps the user in their own reasoning space, using AI as support, not as a substitute.
Another good habit is to understand the answers and solutions AI gives you, and to see how it arrived at them.
How can you do this right away, while using it?
Many AI chats show the reasoning they used before the final answer. Reading that reasoning is important cognitively and helps you spot where the AI made a mistake.
More posts will follow where I share my views on AI and how to mitigate negative effects it might have on our psyche in the future.
1 - Mental consequences of integrated AI and non-integrated AI
Will the use of AI in society gradually produce a kind of cognitive handicap, even for basic reasoning?
If so, will it be a direct consequence of the support it offers, substituting our work?
Or will it lead to dependence on its use without totally replacing our work?
This raises another question, in my opinion much more important: will we be able, in the future, to be self‑sufficient again if this support suddenly disappears?
I believe these are some of the most important questions for the near future.
In the next posts I will share my ideas to protect human integrity and how, in my opinion, we should manage AI usage.
Social