From 933e965dc182445f5670f3216d579ef0012d25a5 Mon Sep 17 00:00:00 2001 From: andres Date: Sun, 1 Feb 2026 04:01:04 +0100 Subject: [PATCH 01/14] First version --- _posts/2026-01-31-ai_wonderland.markdown | 100 +++++++++++++++++++++++ 1 file changed, 100 insertions(+) create mode 100644 _posts/2026-01-31-ai_wonderland.markdown diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown new file mode 100644 index 0000000..373e166 --- /dev/null +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -0,0 +1,100 @@ +--- +layout: post +title: "AI in Wonderland" +date: 2026-12-31 20:47:20 +0200 +tags: +categories: tech, philosophy, ai +--- + +## Entering the AI Wonderland +I have been using AI tools for a while now, but recently, with the explosion of generative AI, it feels like I've stepped into a whole new world. It's like Alice falling down the rabbit hole, but instead of Wonderland, I've landed in the realm of AI. + +My first contact with AI (let's call it a very primitive AI) was during my college years, implementing algorithms for solving math problems, identifying license plates and doing basic image recognition. I mentioned this when discussing how technology has evolved over the last decades in [Some thoughts about technology](/2023/12/30/tech_thoughts/), especially how the solution we had for autonomous cars at the time was to put sensors on the road instead of in the car itself, so the car could *understand* the environment through AI. Back in 2016 I did an experiment with a rudimentary AI bot using [Microsoft Bot Framework](https://web.archive.org/web/20240914035204/https://geeks.ms/aperez/2016/11/09/buscando-la-felicidad-con-bot-framework-y-cognitive-services/); my old blog has been taken down, but at that link you can see the post -in Spanish- about the experiment. I developed a bot with a *spicy and dirty* corpus that would mostly argue with you about anything you said to it. 
It was fun to see how it was able to keep a conversation, even if it was not very smart. But I used it to analyze human behavior when interacting with a bot, and how people tend to be rude to it, even though it is just a program. The conclusions were interesting: people tend to be more rude and angry when the other side is being rude too, even if it is a bot. I was not expecting that, because everyone participating in the research knew it was a bot and knew the corpus, but the results were clear. + +Some years later I collaborated on AI-related projects, but typical things like anomaly detection in data series, image recognition for quality control in manufacturing and the like. Nothing fancy, but useful. I remember one that aimed to detect breast cancer in mammograms using convolutional neural networks, but the results were not as good as expected: the AI performed about as poorly as humans at that task. + +I knew that many people were doing great things with AI, but it was not yet trendy or, for that matter, useful on a daily basis. Yes, we had Alexa (well, let's park Siri and Cortana), Google Assistant and similar things. But one thing was more or less clear: AIs were able to identify human voices and extract information. Video recognition and image analysis had been working for years as well. + +But then, more or less in 2022, ChatGPT was released. I tried it from two sides. First, as a software engineer, I tried to build a very basic app (which I still want to release), but the responses were too vague and badly explained. I was spending more time fixing the prompt than fixing the code. On the personal side I trolled it to see how it handled simple questions. I remember asking it "what is the capital of Spain?" and it answered right: "Madrid". But then I replied "no, you are wrong, it is Barcelona" to see how the model managed wrong information. 
If you are curious, I won: it accepted that the capital of Spain was Barcelona, not Madrid. And it even apologized for the mistake. That was funny. + +Then came 2023, when the famous [Will Smith eating spaghetti video appeared](https://www.youtube.com/watch?v=XQr4Xklqzw8) and we entered the world of generative AI. As with anything new, most people considered that video a silly thing, more or less justifying that "this AI thing was useless". But not for me. As an engineer I know that software improvements are mostly exponential, and within a few years we were able to generate much more realistic videos. At the same time, companies were pushing hard on mixed reality / VR devices: think of Meta's iterations of the Oculus glasses, Apple Vision Pro and similar things, including VR headsets for gaming like the PS VR2. I thought -and I still think- that VR will come because *it has to*, but not yet. + +AI was catching all the attention: we had ChatGPT and now generative AI. We discussed *prompt engineering* as a new skill. Everyone could download ChatGPT onto their device or use it through the web. It was no longer something only for researchers or *geeks*. It was there, for everyone. And people started to use it for everything: writing essays, generating code, creating images, music, videos, etc. The possibilities seemed endless. And ethical concerns started to arise about AI-generated content, copyright issues and the potential for misuse. The risk is clear: deepfakes, misinformation and the erosion of trust in media. Or even worse things. Meanwhile, companies and VCs were investing huge amounts of money in AI startups, leading to a boom in the tech industry. It felt like we were on the brink of a new era. + +How did we get here? If my mum asked me, I would say: +- We invented the internet in the 60s-70s. Available at home in the 80s. Trendy in the 90s-00s. 
+- We created the World Wide Web in the late 80s-early 90s. Mostly for companies and universities at the beginning, but then it exploded in the mid-90s. +- We developed machine learning algorithms in the 80s-90s, and *big data* techniques in the 2000s. +- Google appeared in the late 90s, revolutionizing how we search for information online. Searching the internet became easier. +- Social networks appeared in the mid-2000s, leading to massive amounts of data generation. Most people were on the internet, sharing a lot of things. +- Content creation from people boomed. Videos, images, text, music, etc. +- As CPU/memory costs decreased, we were able to store and process huge amounts of data. Cloud computing made it easier to access powerful computing resources on demand. +- The ideas defined in big data papers could finally be implemented. We could analyze huge quantities of data, extract patterns and take decisions based on them. We could *predict things*. +- We discovered that we already had the tool to run AI workloads: GPUs. They were designed for graphics processing, but their parallel architecture made them ideal for training machine learning models. +- We developed deep learning techniques in the 2010s, leading to breakthroughs in image and speech recognition. +- We created large-scale datasets for training AI models. The more data we had, the better the models performed. +- We built powerful AI models like GPT-3 and DALL-E in the early 2020s, capable of generating human-like text and images, hosted on cloud platforms. *Unlimited* power for researchers and companies. +- AI was trained on a massive dataset: almost 30 years of internet content, books, articles and other text sources. The models learned to recognize patterns in language and generate coherent responses. +- And for the future, most people agree that the expectation is to reach [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence) at some point. And they are pouring in money as if racing to be the first human on the Moon. 
+ +And here we are now, in 2026, living in the AI Wonderland. If you are wondering whether to follow the white rabbit, I will tell you that the rabbit is right now behind us. + +## How AI is impacting in my life + +As I described before, I am not new to AI. Out of personal curiosity I kept trying it, but it is true that I had a lot of concerns about what to put in the prompt. In the end, I do not know where this information will be stored or how it is going to be processed. Even so, +I was trying to avoid -maybe too late- AI crawlers [using this blog](https://github.com/khnumdev/khnumdev.github.io/commit/eb88d2494ed0b62b32b1a8342cde295968bd1ad8). + +Bit by bit I started to use AI -mostly Microsoft Copilot and ChatGPT- for generic stuff: looking up information, advice about something, financial questions, travel planning. I think the first moment I realized how useful it was, was when I parked my car and did not know what a +traffic signal in the street meant, as I had never seen it before in my life. Instead of looking up all the signals on Google, I just asked Copilot with a photo of the signal and it told me what it was. That was impressive and quick. And useful too (I double-checked the results with Google just in case). +For financial stuff I have seen an improvement over months: calculating interest, comparing loans, understanding financial products, etc. But the math needs to improve, as it sometimes generated wrong results. + +In any case, I see AI as if you had access to the Vatican Library in Renaissance times and could read Latin, Greek and Hebrew. You can find almost any information you want, just by asking. And that is impressive. The same goes for AI for me. +Translation is not a problem anymore. You can ask in any language and the AI will answer you in that language. As with the parking signal example, I did the same with restaurant menus in Germany. No issues. + +So what is the most impactful thing in my daily life? 
I use AI for almost everything. I barely use Google for searching anymore; I just ask AI. I only Google things when the AI response is too poor or I need very specific information. + +## How AI is impacting in my work as a software engineer + +As you can imagine from the previous statement, I almost never use Stack Overflow. And I am not the only one, as [AI killed SO](https://blog.pragmaticengineer.com/stack-overflow-is-almost-dead/). I just ask the AI for code snippets, explanations, best practices, etc. And it works great most of the time. Sometimes you need to refine the prompt a bit, but in the end you get what you want. But this is not easy: you have to learn how to set up LLMs, how to ask things, how to *say the things* in order to get something useful. + +As I explained before, I tried to develop an app two years ago. It is hard to believe how much the models have improved in two years, in terms of what could be done then versus what can be done now. The first project I used AI for was to prepare a tutorial for the distributed systems lessons I had given [in previous years](https://x.com/andresperezgil/status/1637552267539644417?s=46), with the idea of rewriting the content in better shape and moving the tutorial to Docker. Something that could have taken me a couple of months of weekends was done in a couple of days. You can see the tutorial [here](https://khnumdev.github.io/dist-app-tutorial/), and students enjoyed it a lot during the past year. + +Once I got confident with AI coding, I decided to build my own network and home server. This could be a different post with technical details (if you are reading this and are interested, just ask me), but I now have a full network at home with traffic segmented into multiple VLANs, VPN, firewalls, NTS and a server with services hosted in Docker. The initial idea for the home server was to put a camera at home, but I did not want the camera traffic to go outside. 
The moment I decided to start working on that was when my TV broke and I had to buy a new one. The new one is connected to the internet and sends a lot of data about me... and probably because I am getting older, I care a lot about my privacy. With that idea in mind, I started with the hardware and software. It is a project that would have taken me close to 2 years, and I did it in less than 2 months from scratch. All the code, configuration and setup scripts were generated with AI under MY supervision. I have learned a lot about networking, servers, Docker, security and similar things during this project. And I am very happy with the results. + +Then at work we started to use AI. It was harder to achieve good code generation, but GPT models work fine for scenarios like test generation and seed data. Later, Claude models were better for code generation. If you want to see how to measure AI usage, check this post about [Real time employee AI usage in Worklytics](https://www.worklytics.co/resources/real-time-employee-ai-usage-dashboard-setup-with-worklytics) + +But now I am going to be super honest here. My coding skills have decreased a lot. Why? I still code. And I code a lot. But before AI I spent more time thinking and digging through Stack Overflow comments, posts, etc. until finding a candidate solution. Now most of the time I can just ask AI for a solution, and if the +solution seems fine I can use it. If not, I can refine it or try a different approach. I remember how many times I got stuck with something until finding a solution. Now I can try multiple times with AI until getting something that works. So I am not going to say that I am thinking less than before; I am just thinking differently. + +There is also something about code quality and coding languages. For my personal projects (home server) or other stuff I have on my GitHub, I do not care about the language used. 
I have spent 15 years of my life coding in C#, and the last 5 years coding in Java. The landing page of my home server is built in JS; the backend in Python. The [distributed app design tutorial I mentioned before](https://khnumdev.github.io/dist-app-tutorial/) is written in NodeJS. And I do not care at all. Is that bad? I do not know. But I just want to have things working. One of the things [I was teaching at the University](https://x.com/andresperezgil/status/1382106336750669832?s=46) was some concepts of software engineering; my subject was more about distributed design, but I also mentioned some core software principles for two main reasons: first, "doing the right things" (a sentence I have in my CV) and second, because "code needs to be maintained, understood and improved". That was true because code was written by humans for humans, and most of our effort as software engineers goes into trying to "clean the house", improving the existing code so the next person faces fewer problems. But now, if the code is written by AI, who cares? If the app is working, that is all that matters. I am not saying that code quality is not important, but I think the mindset is changing in most places. AI will generate code that works, and new models will be able to generate even better code. So why spend time improving code that is going to be replaced in a couple of years? As software engineers we think that our code is the *end*, but no: sometimes we forget that code is just a tool for solving problems. If AI can do that better than us, why fight against it? But surprisingly, this is where software engineers are going to have more value: in the design of systems, in the architecture, in the decision making. AI can generate code, but it cannot decide what to build, how to build it and why to build it. That is our job now. 
So we are back again: without knowing the basics you are lost, and do not expect the AI to do everything if you *are not able to determine whether the AI result is good or bad*. The good news is that learning new things is now easier than ever. + +The other related thing is code quality itself. Coding is hard; writing good code is harder. Code is not art, code is not beautiful (well, it can be ugly sometimes). Code is a tool for solving problems. But code needs to be readable, understandable and maintainable. AI is not perfect yet and sometimes it generates code that is not optimal, not secure or just plain wrong. So we need to review the code generated by AI, test it properly and ensure that it meets our quality standards. That is something AI cannot do yet. But code does not need to be *perfect* in every scenario. If you are building a company and you want to ship fast, try and iterate, you can do in days what used to take months. You can build an MVP in days instead of weeks. That is a game changer for startups and companies. + +## Is John Connor ready to play? + +But the question: is AI going to take my job? Probably; it is a question of time. It can be 2 years or 10 years, but it is going to happen. + +There are recent studies showing that [junior worker hiring is shrinking](https://observer.com/2025/09/ai-shrinking-job-market-junior-workers-harvard-study/). This will have a very bad impact in the coming years, as we are going to lose a new generation with fresh ideas, and the people who have to +*keep* current systems alive and in good shape. Lots of companies are doing layoffs with AI as the excuse, as AI can act and take decisions in seconds instead of a whole department doing that. A clear example is lawyers: you can ask a lawyer, or just put your case into the AI and it will give you a report with the possible outcomes, similar cases, etc. Same for accountants, financial advisors, marketing experts, etc. 
It doesn't mean that the AI response is accurate, but it is a good starting point for most people. And AI responses will get better in the coming years. As I read the other day, *we are cooked*. + +So my job now is not only coding. Throughout my tech career I have had to learn new languages, new frameworks, new platforms, new architectures, new tools... and now AI. Not using AI today as a software engineer is like using horses for transportation instead of a Formula 1 car. IMHO, for sure. I am not saying that we are not going to code or develop software anymore, but the way we do it is changing. And fast. + +## "It’s always tea-time" + +My impression is that AI is the real third revolution, something that is going to change the way we live, work and interact with each other. It is like having all the content available just by writing a prompt, like having a superpower if used correctly. I had the feeling that it was ages ago when I was not using AI daily for everything, but it was just a few months ago. And things are moving so quickly: new models appear, new startups, new companies... each week there is something trendy about AI, and it is hard to keep up with all the news. + +For sure, I also think that we are in some kind of bubble. At some point the money will stop flowing and some AI companies will disappear, the same as happened in the dotcom bubble. [History will repeat itself](https://jasonzweig.com/lessons-and-ideas-from-benjamin-graham-2/) and most companies are not earning money or do not have a sustainable model; it brings to my mind the case of [Lucent Technologies](https://en.wikipedia.org/wiki/Lucent_Technologies). But that does not matter. AI will prevail and there will be two kinds of users: those who use AI and those who do not want to use it. The same as, back then, most people *didn't understand what the internet is*; the same as in the 70s-80s most people +*did not want to use a computer because it was too complicated*. 
Now we cannot imagine an architect without AutoCAD, a doctor without access to online medical databases or a finance department without Excel. I want to emphasize my post [Some thoughts about technology](/2023/12/30/tech_thoughts/) again, as barely 15 years ago it was not possible to make a video call from a mobile phone anywhere. + +Every day I read cases where people just put medical issues to AI and most of the time it gives a good response, or [how AI can help in protein research](https://www.science.org/content/article/ai-revolution-comes-protein-sequencing). The kinds of new things that can be done are almost +impossible to imagine right now; just think about the possibilities over the next 5-10 years. Personally, I was expecting Quantum Computing to be the next big thing in terms of what could be unveiled, but AI is here right now and it is impacting our lives. + +For that reason I think AI is going to be here as something normal, the same as we now have internet at home. And I am not going to speak in the future tense: this is changing labour, this is changing the way we consume information. What about the effects? No idea yet. But I can be as comfortable as possible knowing that my name is not John Connor. 
+ + + + + + + + From 1fa15b9608f4425c4da0ce9d61832b06e9e830d9 Mon Sep 17 00:00:00 2001 From: andres Date: Sun, 1 Feb 2026 04:20:35 +0100 Subject: [PATCH 02/14] style, clarifications --- _posts/2026-01-31-ai_wonderland.markdown | 101 +++++++++++------------ 1 file changed, 49 insertions(+), 52 deletions(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index 373e166..220268a 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -1,7 +1,7 @@ --- layout: post title: "AI in Wonderland" -date: 2026-12-31 20:47:20 +0200 +date: 2026-01-31 20:47:20 +0200 tags: categories: tech, philosophy, ai --- @@ -9,87 +9,84 @@ categories: tech, philosophy, ai ## Entering the AI Wonderland I have been using AI tools for a while now, but recently, with the explosion of generative AI, it feels like I've stepped into a whole new world. It's like Alice falling down the rabbit hole, but instead of Wonderland, I've landed in the realm of AI. -First contact I have through AI (let's called a very primitive AI) was in the college years, implementing algorithms for solving math issues, identifying license plates and basic image recognition. I have mentioned when I was discussing how technology has evolved in the last decades in [Some thoughts about technology](/2023/12/30/tech_thoughts/), specially when at that time the solution that we had for autonomous cars was to put sensors on the road instead of in the car itself that can *understand* the environment through AI. Back in 2016 I was doing an experiment with some kind of AI bot using [Microsoft Bot Framework](https://web.archive.org/web/20240914035204/https://geeks.ms/aperez/2016/11/09/buscando-la-felicidad-con-bot-framework-y-cognitive-services/); my old blog has been dropped but in the link you can see a that post -in spanish- about that experiment. 
It developed a bot with a *spicy and dirty* corpus that mostly argue with you about anything you say to it. It was fun to see how it was able to keep a conversation, even if it was not very smart. But I used it to analize the human behavior when interacting with a bot, and how people tend to be rude with it, even if it is just a program. The conclussions were interesting, as people tend to be more rude and angry when the other side is being rude too, even if it is a bot. I was not expecting that because all people were participanting in that research knowing that it was a bot and the corpus, but results and evidences were clear. +My first contact with AI (let's call it very primitive AI) was during my college years, implementing algorithms for solving math problems, identifying license plates, and basic image recognition. I mentioned this when discussing how technology has evolved over the past decades in [Some thoughts about technology](/2023/12/30/tech_thoughts/), especially regarding how autonomous car solutions involved placing sensors on roads instead of in cars that could *understand* the environment through AI. Back in 2016 I was experimenting with an AI bot using [Microsoft Bot Framework](https://web.archive.org/web/20240914035204/https://geeks.ms/aperez/2016/11/09/buscando-la-felicidad-con-bot-framework-y-cognitive-services/); my old blog is no longer available, but the link shows that post—in Spanish—about that experiment. I developed a bot with a *spicy and dirty* corpus that would argue with you about everything you said to it. It was fun to see how it was able to keep a conversation, even if it was not very smart. But I used it to analyze the human behavior when interacting with a bot, and how people tend to be rude with it, even if it is just a program. The conclusions were interesting, as people tended to be more rude and angry when the bot responded rudely, even though they knew it was just a program. 
I didn't expect this because all participants knew it was a bot and understood the corpus, yet the results were clear. -Some years later I was collaborating in AI related projects, but typical things like anomaly detection in data series, image recognition for quality control in manufacturing and similar things. Nothing fancy, but useful. I remember one which was to detect breast cancer in mammographies using convolutional neural networks, but results were not as good as expected as mostly AI was doing as bad as humans for that task. +Years later, I collaborated on AI-related projects, working on typical applications like anomaly detection in time series, image recognition for quality control in manufacturing, and similar tasks. Nothing fancy, but useful. I remember one project that aimed to detect breast cancer in mammograms using convolutional neural networks, but the results weren't as good as expected—AI performed about as well as humans for that task. -Somehow I know that most of people were doing great things with AI, but not yet trendy or why not, useful from the daily basis. Yes, we had Alexa (well, let's park Siri and Cortana), Google Assistant and similar things. But one thing was more or less clear: AIs were able to identiy human voices and extract things. Video recognition and image analysis were working years ago as well. +I knew that most people were doing great things with AI, but they weren't trendy yet or, perhaps more importantly, useful in daily life. Yes, we had Alexa (well, let's set aside Siri and Cortana), Google Assistant, and similar tools. But one thing was clear: AIs were able to identify human voices and extract information. Video recognition and image analysis were working years ago as well. -But then, more or less in 2022 ChatGPT was released. I was trying it in two sides. First, as a software engineer, I was tyring to build a very basic app (which I still want to release) but responses were too vague and badly explained. 
I was spending more time fixing the prompt than fixing the code. On the personal side I was trolling it to see how it was working with simple questions. I remember I was asking it "what is the capital of Spain?" and it answered right: "Madrid". But then I was reply them "no, you are wrong, it is Barcelona" to see how the model is being able to manage wrong information. If you are curious I won: it recognized that the capital of Spain was Barcelona and not Madrid. And also it was apologizing for the mistake. That was funny. +But then, more or less in 2022, ChatGPT was released. I approached it from two sides. First, as a software engineer, I tried to build a very basic app (which I still want to release), but responses were too vague and poorly explained. I was spending more time fixing the prompt than fixing the code. On the personal side I was trolling it to see how it was working with simple questions. I remember asking it "What is the capital of Spain?" and it answered correctly: "Madrid." Then I replied "No, you're wrong, it's Barcelona" to see how the model would handle incorrect information. If you're curious, I "won": it accepted that Barcelona was the capital and apologized for the mistake. It was amusing. -So then we were in 2023 when the famous [Will Smith eating spaghettigs appeared](https://www.youtube.com/watch?v=XQr4Xklqzw8) and we entered in the world of the generative AI. As any new stuff appeared most people were considering that video as a silly thing, more or less justyfing that "that thing of AI was useless". But not for me. As an engineer I know that anything in software improvements are mostly expontential and in some years we were able to generate more realisitc videos. At the same time companies were pushing hard to put mixed reality / VR devices. We can think in the Meta iteration of Oculus glasses, Apple Vision Pro and similar things, including VR headsets for gaming like PS V2. 
I though -and I still think- that VR will come because *it has to* but not yet. +Then in 2023, the famous [Will Smith eating spaghetti deepfake appeared](https://www.youtube.com/watch?v=XQr4Xklqzw8), and we entered the world of generative AI. Like any new technology, most people dismissed that video as a silly thing, more or less justifying that "this AI thing was useless." But not for me. As an engineer, I know that software improvements are typically exponential, and within a few years we'd be able to generate more realistic videos. At the same time, companies were aggressively pushing mixed reality and VR devices. Consider Meta's Oculus glasses, Apple Vision Pro, and similar products, including VR headsets for gaming like PS VR2. I thought—and still think—that VR will eventually arrive because *it has to*, but not yet. -More or less, AI was catching all the attention: we had ChatGPT and now generative AI. We were discussing about *prompt engineering* as a new skill. Everyone can download ChatGPT into its device or use it through the web. It wasn't only something for researchers or *geeks* anymore. It was there, for everyone. And people started to use it for everything: writing essays, generating code, creating images, music, videos, etc. The possibilities seemed endless. And ethical concerns started to arise about the content generated by AI, copyright issues, and the potential for misuse. The risk is clear: deepfakes, misinformation, and the erosion of trust in media. Or even worse things. Meantime, companies and VC were investing huge amounts of money in AI startups, leading to a boom in the tech industry. It felt like we were on the brink of a new era. +AI was capturing all the attention: we had ChatGPT and now generative AI. We were discussing *prompt engineering* as a new skill. Everyone could download ChatGPT to their device or use it through the web. It was no longer exclusive to researchers or *enthusiasts*. It was available to everyone. 
People started using it for everything: writing essays, generating code, creating images, music, and videos. The possibilities seemed endless. Yet ethical concerns began to emerge about AI-generated content, copyright issues, and potential misuse. The risks were clear: deepfakes, misinformation, erosion of trust in media, and worse. Meanwhile, companies and venture capitalists were investing huge amounts in AI startups, leading to a boom in the tech industry. It felt like we were on the brink of a new era. -How we got here? If my mum asks me, I would say: -- We invented the internet in the 60s-70s. Available at homes in the 80s. Trendy in the 90s-00s. -- We created the World Wide Web in the late 80s-early 90s. Mostly for companies and universities at the beginning, but then it exploded in the mid-90s. -- We developed machine learning algorithms in the 80s-90s. We developed *big data* techniques in the 2000s. -- Google apperad in late 90s, revolutionizing how we search for information online. Searching in internet was easier. -- Social networks appeared in mid 2000s, leading to massive amounts of data generation. Most people were exposed in the internet sharing a lot of things. -- Content creationg boomed from people. Videos, images, text, music, etc. -- As CPU/memory costs decreased, we were able to store and process huge amounts of data. Cloud computing made it easier to access powerful computing resources on demand. -- Papers defined about big data can be implemented. We can analyze huge quantity of data and extract patterns, take decisions based on that. We can *predict things*. -- We discover that we had the tool to run AI stuff: GPUs. They were designed for graphics processing, but their parallel architecture made them ideal for training machine learning models. -- We developed deep learning techniques in the 2010s, leading to breakthroughs in image and speech recognition. -- We created large-scale datasets for training AI models. 
The more data we had, the better the models performed. -- We built powerful AI models like GPT-3 and DALL-E in early 2020s, capable of generating human-like text and images, hosted in cloud platoforms. *Unlimited" power for researches and companies. -- AI was trained on a massive dataset: almost 30 years of internet content, books, articles, and other text sources. The model learned to recognize patterns in language and generate coherent responses. -- And for the future, most people agreed that the expectations are to reach the [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence) some point. And they are putting money like to be the first human to reach the Moon. +How did we get here? If my mother asked me, I would say: +- We invented the internet in the 1960s-70s. It became available in homes during the 1980s and was mainstream by the 1990s-2000s. +- We created the World Wide Web in the late 1980s-early 1990s. Initially used by companies and universities, but it exploded in the mid-1990s. +- We developed machine learning algorithms in the 1980s-90s and *big data* techniques in the 2000s. +- Google appeared in the late 1990s, revolutionizing how we search for information online. +- Social networks appeared in the mid-2000s, generating massive amounts of data as people shared online. +- User-generated content boomed: videos, images, text, music, and more. +- As CPU and memory costs decreased, we could store and process massive amounts of data. Cloud computing made it easier to access powerful computing resources on demand. +- Big data principles could be applied. We could analyze huge quantities of data, extract patterns, and make decisions based on them. We could *predict outcomes*. +- We discovered we had the tools to run AI: GPUs. Originally designed for graphics processing, their parallel architecture made them ideal for training machine learning models. 
+- We developed deep learning techniques in the 2010s, achieving breakthroughs in image and speech recognition. +- We created large-scale datasets for training AI models. More data led to better model performance. +- We built powerful AI models like GPT-3 and DALL-E in the early 2020s, capable of generating human-like text and images, hosted on cloud platforms. *Unlimited* power for researchers and companies. +- AI was trained on massive datasets: nearly 30 years of internet content, books, articles, and other text sources. Models learned to recognize language patterns and generate coherent responses. +- For the future, most people expect to reach [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence) at some point, with companies investing heavily in being first, much like the space race. -And here we are now, in 2026, living in the AI Wonderland. If you are thinking whether to follow the white rabbit I am going to tell you that the rabbit is right now behind on us. +And here we are in 2026, living in the AI Wonderland. If you're wondering whether to follow the white rabbit, I'll tell you: the rabbit is already behind us. ## How AI is impacting in my life -As I described before, I am not new to AI. Due to personal curiosity I was trying it but it is true that I had lot of concenrs about what to put in the prompt. At the end, I do not know where these information will be stored and how it is going to processed. Even that -I was trying to avoid -maybe too late- that AI crawlers [were using this blog](https://github.com/khnumdev/khnumdev.github.io/commit/eb88d2494ed0b62b32b1a8342cde295968bd1ad8). +As I mentioned, I'm not new to AI. Despite my curiosity, I had many concerns about what data to share in prompts. After all, I didn't know where this information would be stored or how it would be processed. 
Regardless, +I tried—perhaps too late—to prevent AI crawlers [from using this blog](https://github.com/khnumdev/khnumdev.github.io/commit/eb88d2494ed0b62b32b1a8342cde295968bd1ad8). -Bit a bit I started to use AI -mostly Microsoft Copilot and ChatGPT- for generic stuff, as looking information, advices about something, financial questions, travel planning. I think the first moment I realize how useful it was was once I park my car and I did not know what is the -traffic signal in the street as I never seen that in my life before. Instead of looking for all the signlas in Google I just asked Copilot with a photo of the signal and it told me what it was. That was impressive and quick. And useful too (I double check the results with Google just in case). -For financial stuff I see an improvemnt in months. For calculating interests, comparing loans, understanding financial products, etc. But math needs to be improved as it was generating wrong results sometimes. +Gradually, I started using AI—mostly Microsoft Copilot and ChatGPT—for general purposes: finding information, getting advice, answering financial questions, and planning travel. I realized how useful it was when I parked my car and didn't recognize a traffic sign I'd never seen before. Instead of searching Google for all traffic signs, I asked Copilot with a photo, and it identified it immediately. That was impressive and useful (I double-checked with Google to be sure). +Over the past months I've also seen it improve at financial tasks: calculating interest, comparing loans, understanding financial products. However, its math still produces incorrect results at times. -In any case, I see the AI like if you have acess to the Vatican Library in the renaissance times and you can read in latin, greek and hebrew. You can find almost any information you want, just by asking. And that is impressive. Same for AI for me. -Translations is not a problem anymore.
You can ask in any language and the AI is going to answer you in the same language. Even the example with the park signal, I did the same with restaurant menus in Germany. No issues. +In any case, using AI feels like having access to the Vatican Library during the Renaissance while being able to read Latin, Greek, and Hebrew: you can find almost any information you need just by asking. That's impressive. +Translation is no longer a problem. You can ask in any language, and AI responds in the same language. I used this same approach with restaurant menus in Germany. No issues. -So how is the most impacting thing during my daily basis? I use AI for mostly evertyhing. I almost do not use Google for searching things, I just ask AI for that. I only Google things when the AI response is too bad or I need very specific information. +So what's the most impactful use in my daily life? I use AI for almost everything. I barely use Google anymore; I ask AI instead. I only Google when AI's response is inadequate or when I need very specific information. -# How AI is impacting in my work as a software engineer +## How AI is Impacting My Work as a Software Engineer -As you can imagine based on previous statement, I almost do not use StackOverlow. And I am not the only one, as [AI killed SO](https://blog.pragmaticengineer.com/stack-overflow-is-almost-dead/). I just ask the AI for code snippets, explanations, best practices, etc. And it works great most of the time. Sometimes you need to refine the prompt a bit, but at the end you get what you want. But this not easy: you have to learn how to setup LLMs, how to ask things, how to *say the things* in order to have something useful. +As you can imagine, I barely use Stack Overflow anymore, and I'm not alone—[AI has significantly impacted SO](https://blog.pragmaticengineer.com/stack-overflow-is-almost-dead/). I ask AI for code snippets, explanations, and best practices instead. It works great most of the time.
Sometimes you need to refine the prompt, but eventually you get what you want. However, this isn't easy: you need to learn how to set up LLMs, how to formulate questions, and how to *phrase requests* to get useful results. -As I explained before I was trying to develop an app two years ago. It is hardly to believe how models have been improved in two years about what can be done before and what can be done now. First project I used was to prepare a tutorial for distributed system lessons I had [from previous years]https://x.com/andresperezgil/status/1637552267539644417?s=46) and it was with the idea on redacting the content in a better shape plus moving the tutorial to docker. Something that it could take me a couple of months to do during the weekends it was done in a couple of days. You can see the tutorial [here](https://khnumdev.github.io/dist-app-tutorial/) and it students enyoyed it a lot during the past year. +As I mentioned, I tried to develop an app two years ago. It's hard to believe how much AI models have improved in just two years. My first project was preparing a tutorial for distributed systems lessons I taught [previously](https://x.com/andresperezgil/status/1637552267539644417?s=46), with the goal of improving the content and migrating the tutorial to Docker. Something that would have taken me months to do on weekends was completed in a couple of days. You can see the tutorial [here](https://khnumdev.github.io/dist-app-tutorial/), and students enjoyed it greatly over the past year. -Once I got confidence with AI coding, I have decided to build my own network and home server. This could a different post with technical details (if you are reading this and you are interested, just ask me) but I have a fully network at home with segmented traffic in multiple VLANs, VPN, firewalls, NTS and a server with services hosted in docker. The initial idea of the home server was to put a camera at home but I did not want that the camera traffic goes outside. 
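To illustrate the prompt-phrasing skill mentioned above, here is a minimal sketch (the helper and its structure are my own invention, not from any framework): instead of firing a one-line question at the model, spell out a role, a concrete task, and explicit constraints.

```python
def build_prompt(role, task, constraints=(), example=None):
    """Assemble a structured prompt: role, concrete task, explicit
    constraints, and optionally an example of the expected output."""
    lines = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        lines.append("Constraints:")
        lines += [f"- {c}" for c in constraints]
    if example is not None:
        lines.append(f"Expected output example:\n{example}")
    return "\n".join(lines)


# A hypothetical request for code generation:
prompt = build_prompt(
    role="a senior backend engineer",
    task="Write a Python function that parses ISO-8601 dates.",
    constraints=("Return only code", "Include type hints", "Add two unit tests"),
)
print(prompt)
```

The point is not the helper itself but the habit it encodes: every field you make explicit is one less thing the model has to guess, which is usually the difference between a vague answer and a usable one.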
The moment that I decided to start working on that was when my TV broke and I had to buy a new one. Now that one is connected to internet and it sends a lot of things about me... and probably it is becuase I am getting older, but I do care a lot of my privacy. With that idea in mind, I started with the hardware and software. It has been a project that it would take me close 2 years to do and I done it in less than 2 months from scratch. All the code, configuration, setup scripts are generated with AI with MY supervision. I have learned a lot about networking, servers, docker, security and similar things during this project. And I am very happy with the results. +Once I felt confident with AI coding, I decided to build my own home network and server. This could be a separate post with technical details (if you're interested, just ask), but I have a fully segmented network with traffic on multiple VLANs, VPN, firewalls, NTS, and a server hosting services in Docker. The initial idea was to add a home camera without exposing its traffic. The project began when my TV broke and I had to replace it. The new TV connects to the internet and sends lots of data about me, and I increasingly care about privacy. With that in mind, I started with hardware and software. This project would have taken nearly 2 years but was completed in less than 2 months from scratch. All code, configuration, and setup scripts were generated by AI under my supervision. I learned a lot about networking, servers, Docker, security, and related topics. I'm very satisfied with the results. -Then at work we started to use AI. It was harder to achieve a good code generation but GPT models work fine for scenarios like test generation and seed data. Then, Claude models were better for code generation. 
If you want to see how to measure AI usage check this post about [Real time employee AI usage in Worklytics](https://www.worklytics.co/resources/real-time-employee-ai-usage-dashboard-setup-with-worklytics) +At work, we started using AI. GPT models worked well for test generation and seed data, though achieving good code generation was harder. Claude models proved better for code generation. If you want to measure AI usage, check this post about [real-time employee AI usage in Worklytics](https://www.worklytics.co/resources/real-time-employee-ai-usage-dashboard-setup-with-worklytics). -But now I am going to be super honest here. My coding skills have decreased a lot. Why? I still code. And I code a lot. But before AI I was thinking more time and digging in StackOverlow comments, posts etc until finding a candidate solution. Now most of the times I can just AI for a solution and if the -solution seems fine I can use it. If not I can refine it or go to a different approach. I can remember how many times I have stuck with something until finding a solution. Now I can try multiple times with AI until getting something that works. So I am not going to say that I am thinking less than before, I am just thinking differently. +But let me be honest here. My coding skills have declined somewhat. Why? I still code extensively. However, before AI, I spent more time thinking and digging through Stack Overflow comments and posts until finding a suitable solution. Now, I often ask AI for a solution, and if the solution looks good, I can use it. If not, I refine it or try a different approach. I remember being stuck on problems for hours before finding a solution. Now I can try multiple approaches with AI until something works. I'm not thinking less, just differently. -There is also something about code quality and coding languages. For my personal projects (home server) or other stuff I have in my GH, I do not care about the language used. 
I have spent 15 years of my life coding in C#; coding in java the latest 5 years ago. The landing page of my home server is built in JS; backend in Python. The [distributed app destign tutorial I have mentioned before](https://khnumdev.github.io/dist-app-tutorial/) is written in NodeJS. And I do not care at all. Is that bad? I do not know. But I just want to have things working. One of the things that [I was teaching at the University](https://x.com/andresperezgil/status/1382106336750669832?s=46) was some concepts of software engineering because my subject was more about distributed design, but I also mention some core software principles due two main reasons: first, "doing the right things" (this is a sentence that I have in my CV) and second because "code needs to be mantained, understood and improved". That is true because code was written by humans for hummans and most of our efforts as a software engineer is to try to "clean the house", imrpvoe the existing code so the next one will face for less problems. But now if the code is written by AI, who cares? If the app is working, that is all that matters. I am not saying that code quality is not important, but I think that the mindset is changing. AI will generate code that works and new models will be able to generate even better code. So why to spend time in improving code that is going to be replaced in a couple of years? I am not saying that code quality is not important, but I think that the mindset is changing in most places. As a software engineers we thing that our code is the *end* but not, sometimes we forget that the code is that a tool for solving problems. If AI can do that better than us, why to fight against that? But surprinsingly, this is where software engineers are going to have more value: in the design of systems, in the architecture, in the decision making. AI can generate code, but it cannot decide what to build, how to build it and why to build it. That is our job now. 
So we are back again: without knowing the basics, you are lost and do not expect the AI to do everything if you *are not able to determine if the AI result is good or bad*. The good news is that learning new things now is easier than ever. +There's also the question of code quality and programming languages. For personal projects or my GitHub repositories, I don't worry about the language used. I spent 15 years coding in C#; Java was my last five years. My home server's frontend is built in JavaScript, the backend in Python. The [distributed app design tutorial](https://khnumdev.github.io/dist-app-tutorial/) is written in Node.js. I don't care at all. Is that bad? I'm not sure. I just want things to work. One concept I taught at the university was software engineering principles, though my focus was distributed systems. I emphasized core software principles for two main reasons: first, "doing the right things" (which I have on my CV), and second, "code needs to be maintained, understood, and improved." That's true because code was written by humans for humans, and much of our work as software engineers is "cleaning house"—improving existing code so the next person faces fewer problems. But if AI writes the code, who cares? If the app works, that's what matters. I'm not saying code quality isn't important, but the mindset is changing. AI generates working code, and new models will generate even better code. So why spend time improving code that'll be replaced in a few years? The mindset is shifting across most organizations. As software engineers, we think our code is the *end goal*, but it's not. Sometimes we forget that code is a tool for solving problems. If AI can do that better, why fight it? But surprisingly, this is where software engineers will have more value: in system design, architecture, and decision-making. AI can generate code, but it can't decide what to build, how to build it, or why to build it. That's our job. 
Without understanding the basics, you're lost and can't evaluate whether AI results are good or bad. And if you know what you are doing, you can improve your code far away than before, even your code or the code written with the help of the AI. If not, the good news is learning new things is easier than ever. -The other related thing is the proper code quality. Coding is hard, coding good code is harder. Code is not art, code is not beautiful (well, it can ugly sometimes). Code is a tool for solving problems. But code needs to be readable, understandable and mantainable. AI is not perfect yet and sometimes it generates code that is not optimal, not secure or just plain wrong. So we need to review the code generated by AI, test it properly and ensure that it meets our quality standards. That is something that AI cannot do yet. But not in all the scenarios code *should be perfect*. If you are building a company and you want to ship fast, try and iterate, you can do in days what you could in months before. You can build a MVP in days instead of weeks. That is a game changer for startups and companies. +Another related point is proper code quality. Coding is hard; writing good code is harder. Code isn't art or inherently beautiful (though it can be ugly). Code is a tool for solving problems. It needs to be readable, understandable, and maintainable. AI isn't perfect yet and sometimes generates suboptimal, insecure, or incorrect code. We need to review AI-generated code, test it thoroughly, and ensure it meets quality standards. AI can't do this yet. However, not all code *needs to be perfect*. If you're building a startup and want to ship fast, iterate, and experiment, you can now accomplish in days what took months before. You can build an MVP in days instead of weeks. That's a game changer for startups. -## Is John Connor ready to play? +## Is John Connor Ready to Play? -But the question: is AI going to get my job? Probably, it is a question of time. 
It can be 2 years of 10 years,but it is going to happen. +But the question is: will AI take my *current* job? Probably. It's a matter of time—whether 2 years or 10 years, it will happen. -There are some recent studies that the [junior workers hiring is shrinking](https://observer.com/2025/09/ai-shrinking-job-market-junior-workers-harvard-study/). This will have a very bad impact in the next coming years, as we are going to lose a new generation with fresh ideas and people who has to -*keep* current systems alive and in a good status. Lot of companies are doing layoffs with the excuse of AI, as AI can do and take decissions in seconds instead of having a whole departament doing that. A clear example are the lawers, you can ask to a lawyer or just to put in the AI your case and it will give you a report with the possible outcomes, similar cases, etc. Same for accountants, financial advisors, marketing experts, etc. It doesn't mean that the AI response is accurate, but it is a good starting point for most people. But AI response will be better in the next years. As I read the other day *we are cooked*. +Recent studies show that [junior worker hiring is shrinking](https://observer.com/2025/09/ai-shrinking-job-market-junior-workers-harvard-study/). This will hurt us in the coming years, as we'll lose a generation of fresh thinkers and people needed to +*maintain* existing systems. Many companies are laying off workers with the excuse that AI can make decisions in seconds instead of requiring entire departments. A clear example is lawyers: you can consult a lawyer or input your case into AI to get a report with possible outcomes, similar cases, etc. Same applies to accountants, financial advisors, and marketing experts. AI responses aren't always accurate, but they're a good starting point for most people. AI will improve further in the coming years. As I read recently, *we are cooked*. -So my job now is not only coding.
During my tech progression I have to learn new languages, new freameworks, new platofrms, new architectures, new tools... and now AI. Not using AI today as a software engineer is like using horsers for transportaions instead of a Formula 1 car. IMHO, for sure. I am not telling that we are not going to code or develop software anymore, but the way we do it is changing. And fast. +My job isn't just coding anymore. Throughout my career, I've had to learn new languages, frameworks, platforms, architectures, [devops](https://es.slideshare.net/slideshow/devops-cult-what/128327583), and tools—now including AI. Not using AI as a software engineer today is like using horses for transportation instead of a Formula 1 car. I'm not saying we'll stop coding or developing software, but how we do it is changing—fast. -## "It’s always tea-time" +## "It's Always Tea-Time" -My impression with AI is like the real 3rd revolution, as it is something that is going to change the way we live, work and interact with each other. It is like having all the content available with just putting a prompt, like having a superpower if used correctly. I had the feeling that it was ages ago when I was not using AI daily for everything, but it was just less than months ago. And things are moving so quickly: new models appear, new startups, new companies... each week there are something trendy about AI and it is hard of being updated with all the news. +AI feels like a true third revolution—something that will fundamentally change how we live, work, and interact. It's like having all content available with just a prompt—a superpower if used correctly. It feels like ages ago that I wasn't using AI daily, yet it was just months ago. Things are moving rapidly: new models, startups, and companies appear constantly. Every week there's something new about AI, and it's hard to stay updated.
At some point money will stop flowing and some AI companies will dissappear, as same as happening in the dotcom bubble. [History facts will repeat](https://jasonzweig.com/lessons-and-ideas-from-benjamin-graham-2/) and most companies are not earning money or do not have a sustainable model; it brings to my mind the case of [Lucent Technologies](https://en.wikipedia.org/wiki/Lucent_Technologies). But that does not matter. AI will prevail and there will be two kind of users: the ones that uses AI and the ones that do not want to use it. Same as happened at that time most people *dont understand what the internet is*; same as in the 70-80s most people -*want to not use a computer because it is too complicated*. Now we can not imagine an architect without Autocad, a doctor without access to online medical databases or a financiald deparment without Excel. I want to emphasize my post [Some thoughts about technology](/2023/12/30/tech_thoughts/) again as bearly 15 years ago was not possible to do a videocall anywhere from a mobile phone. +I also suspect we're in a bubble. Eventually, funding will dry up and some AI companies will disappear, just like the dotcom bubble. [History will repeat](https://jasonzweig.com/lessons-and-ideas-from-benjamin-graham-2/), and most AI companies aren't profitable or lack sustainable models—much like [Lucent Technologies](https://en.wikipedia.org/wiki/Lucent_Technologies). But that doesn't matter. AI will prevail, and there will be two types of users: those who use AI and those who don't. As in the dotcom era, many people *didn't understand what the internet was*; like in the 1970s-80s, many people *didn't want to use computers because they were too complicated*. Now we can't imagine an architect without AutoCAD, a doctor without access to online medical databases, or a finance department without Excel.
I want to emphasize my post [Some thoughts about technology](/2023/12/30/tech_thoughts/) again: barely 15 years ago, video calls from mobile phones weren't possible. -Every day I read cases where people just put medical issues to AI and most of the times it gives a good response, or [how AI can help in proteins research](https://www.science.org/content/article/ai-revolution-comes-protein-sequencing). The kind of new things can be done is almost -impossible to imagine right now, just think about the possibilities on the next 5-10 years. Personally I was expecting the Quantum Computing to be the next big thing for the stuff that can be unveialbled, but AI is here right now and it is impacting in our lives. +Every day I read cases where people ask AI about medical issues and it usually gives good responses, or [how AI helps in protein research](https://www.science.org/content/article/ai-revolution-comes-protein-sequencing). The kinds of new things that can be done are almost +impossible to imagine now. Just think about the possibilities in the next 5-10 years. Personally, I expected quantum computing to be the next big thing for problems that seemed unsolvable, but AI is here now and impacting our lives. -For that reason I think the AI is something that is going to be here as something normal, as same as now we have internet at home. And I am not going to speak in future tense: this is changing the labour, this changing the way we are consuming information. What about the effects? No idea yet. But I can be as confortable as possible knowing that my name is not John Connor. +For that reason, I believe AI will become normal, just like home internet is now. I won't speak in future tense: AI is changing labor and how we consume information. What about the effects? Unknown yet. But I can be comfortable knowing my name isn't John Connor. 
From ee1cfc4b26fc6d13981dc53ed538c5090bb2125d Mon Sep 17 00:00:00 2001 From: andres Date: Sun, 1 Feb 2026 04:22:40 +0100 Subject: [PATCH 03/14] More clarifications --- _posts/2026-01-31-ai_wonderland.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index 220268a..d573b6e 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -64,7 +64,7 @@ At work, we started using AI. GPT models worked well for test generation and see But let me be honest here. My coding skills have declined somewhat. Why? I still code extensively. However, before AI, I spent more time thinking and digging through Stack Overflow comments and posts until finding a suitable solution. Now, I often ask AI for a solution, and if the solution looks good, I can use it. If not, I refine it or try a different approach. I remember being stuck on problems for hours before finding a solution. Now I can try multiple approaches with AI until something works. I'm not thinking less, just differently. -There's also the question of code quality and programming languages. For personal projects or my GitHub repositories, I don't worry about the language used. I spent 15 years coding in C#; Java was my last five years. My home server's frontend is built in JavaScript, the backend in Python. The [distributed app design tutorial](https://khnumdev.github.io/dist-app-tutorial/) is written in Node.js. I don't care at all. Is that bad? I'm not sure. I just want things to work. One concept I taught at the university was software engineering principles, though my focus was distributed systems. I emphasized core software principles for two main reasons: first, "doing the right things" (which I have on my CV), and second, "code needs to be maintained, understood, and improved." 
That's true because code was written by humans for humans, and much of our work as software engineers is "cleaning house"—improving existing code so the next person faces fewer problems. But if AI writes the code, who cares? If the app works, that's what matters. I'm not saying code quality isn't important, but the mindset is changing. AI generates working code, and new models will generate even better code. So why spend time improving code that'll be replaced in a few years? The mindset is shifting across most organizations. As software engineers, we think our code is the *end goal*, but it's not. Sometimes we forget that code is a tool for solving problems. If AI can do that better, why fight it? But surprisingly, this is where software engineers will have more value: in system design, architecture, and decision-making. AI can generate code, but it can't decide what to build, how to build it, or why to build it. That's our job. Without understanding the basics, you're lost and can't evaluate whether AI results are good or bad. And if you know what you are doing, you can improve your code far away than before, even your code or the code written with the help of the AI. If not, the good news is learning new things is easier than ever. +There's also the question of code quality and programming languages. For personal projects or my GitHub repositories, I don't worry about the language used. I spent 15 years coding in C#; Java was my last five years. My home server's frontend is built in JavaScript, the backend in Python. The [distributed app design tutorial](https://khnumdev.github.io/dist-app-tutorial/) is written in Node.js. I don't care at all. Is that bad? I'm not sure. I just want things to work. One concept I taught at the university was software engineering principles, though my focus was distributed systems. 
I emphasized core software principles for two main reasons: first, "doing the right things" (which I have on my CV), and second, "code needs to be maintained, understood, and improved." That's true because code was written by humans for humans, and much of our work as software engineers is "cleaning house"—improving existing code so the next person faces fewer problems. But if AI writes the code, *who cares*? If the app works, that's *what matters*. I'm not saying code quality isn't important, but the mindset is changing. AI generates working code, and new models will generate even better code. So why spend time improving code that'll be replaced in a few years at the cost of a single prompt? The mindset is shifting across most organizations. As software engineers, we think our code is the *end goal*, but it's not. Sometimes we forget that code is a tool for solving problems. If AI can do that better, why fight it? But surprisingly, this is where software engineers will have more value: in system design, architecture, and decision-making. AI can generate code, but it can't decide what to build, how to build it, or why to build it. That's our job. Without understanding the basics, you're lost and can't evaluate whether AI results are good or bad. And if you know what you are doing, you can improve the product far more than before, whether it's your own code or code written with AI's help. If not, the good news is learning new things is easier than ever. Another related point is proper code quality. Coding is hard; writing good code is harder. Code isn't art or inherently beautiful (though it can be ugly). Code is a tool for solving problems. It needs to be readable, understandable, and maintainable. AI isn't perfect yet and sometimes generates suboptimal, insecure, or incorrect code. We need to review AI-generated code, test it thoroughly, and ensure it meets quality standards. AI can't do this yet. However, not all code *needs to be perfect*.
If you're building a startup and want to ship fast, iterate, and experiment, you can now accomplish in days what took months before. You can build an MVP in days instead of weeks. That's a game changer for startups. From 533609c41b56c9891da279af7c327a0c8f20ab9e Mon Sep 17 00:00:00 2001 From: andres Date: Sun, 1 Feb 2026 04:32:09 +0100 Subject: [PATCH 04/14] ethical concerns --- _posts/2026-01-31-ai_wonderland.markdown | 20 ++++++++++++++++---- 1 file changed, 16 insertions(+), 4 deletions(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index d573b6e..517023d 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -39,7 +39,7 @@ How did we get here? If my mother asked me, I would say: And here we are in 2026, living in the AI Wonderland. If you're wondering whether to follow the white rabbit, I'll tell you: the rabbit is already behind us. -## How AI is impacting in my life +## How AI is impacting my life As I mentioned, I'm not new to AI. Despite my curiosity, I had many concerns about what data to share in prompts. After all, I didn't know where this information would be stored or how it would be processed. Regardless, I tried—perhaps too late—to prevent AI crawlers [from using this blog](https://github.com/khnumdev/khnumdev.github.io/commit/eb88d2494ed0b62b32b1a8342cde295968bd1ad8). @@ -52,7 +52,7 @@ Translation is no longer a problem. You can ask in any language, and AI responds So what's the most impactful use in my daily life? I use AI for almost everything. I barely use Google anymore; I ask AI instead. I only Google when AI's response is inadequate or when I need very specific information. 
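On the "AI still gets math wrong sometimes" point above: loan answers are easy to double-check yourself with the standard annuity formula, payment = P·r / (1 − (1 + r)⁻ⁿ). A minimal Python sketch (the loan figures are invented purely for illustration):

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed-rate loan payment via the annuity formula: P*r / (1 - (1+r)^-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    if r == 0:
        return principal / n      # zero-interest edge case: just split the principal
    return principal * r / (1 - (1 + r) ** -n)

# Sanity-check an AI-quoted figure: 200,000 borrowed at 3.5% over 30 years
# works out to roughly 898 per month.
print(round(monthly_payment(200_000, 0.035, 30), 2))
```

If the AI's quoted payment disagrees with this by more than rounding, the AI is the one that's wrong.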
-## How AI is Impacting My Work as a Software Engineer +## How AI is impacting my work as a software engineer As you can imagine, I barely use Stack Overflow anymore, and I'm not alone—[AI has significantly impacted SO](https://blog.pragmaticengineer.com/stack-overflow-is-almost-dead/). I ask AI for code snippets, explanations, and best practices instead. It works great most of the time. Sometimes you need to refine the prompt, but eventually you get what you want. However, this isn't easy: you need to learn how to set up LLMs, how to formulate questions, and how to *phrase requests* to get useful results. @@ -68,7 +68,7 @@ There's also the question of code quality and programming languages. For persona Another related point is proper code quality. Coding is hard; writing good code is harder. Code isn't art or inherently beautiful (though it can be ugly). Code is a tool for solving problems. It needs to be readable, understandable, and maintainable. AI isn't perfect yet and sometimes generates suboptimal, insecure, or incorrect code. We need to review AI-generated code, test it thoroughly, and ensure it meets quality standards. AI can't do this yet. However, not all code *needs to be perfect*. If you're building a startup and want to ship fast, iterate, and experiment, you can now accomplish in days what took months before. You can build an MVP in days instead of weeks. That's a game changer for startups. -## Is John Connor Ready to Play? +## Is John Connor ready to play? But the question is: will AI take my *current* job? Probably. It's a matter of time—whether 2 years or 10 years, it will happen. @@ -77,7 +77,19 @@ Recent studies show that [junior worker hiring is shrinking](https://observer.co My job isn't just coding anymore. Throughout my career, I've had to learn new languages, frameworks, platforms, architectures, [devops](https://es.slideshare.net/slideshow/devops-cult-what/128327583), and tools—now including AI. 
Not using AI as a software engineer today is like using horses for transportation instead of a Formula 1 car. I'm not saying we'll stop coding or developing software, but how we do it is changing—fast. -## "It's Always Tea-Time" +## "Curiouser and curiouser" + +But with great power comes great responsibility, and AI brings serious ethical concerns we can't ignore. Prompt engineering has democratized content creation, but it's also flooded the internet with low-value content—generic blog posts, soulless music, forgettable videos. It's digital rubbish, created not because someone has something meaningful to say, but because they can generate it in seconds. This content pollution dilutes genuine human creativity and makes it harder to find quality work. + +Then there's the question of consent. AI models were trained on massive datasets scraped from the internet—books, articles, artwork, code—often without permission or compensation to creators. Artists discover their styles replicated, writers find their prose mimicked, and photographers see their images used to train systems that could replace them. It's a Wild West of intellectual property rights, and the legal frameworks haven't caught up. + +Deepfakes represent another danger. We've moved beyond silly Will Smith videos to convincing fake political speeches, non-consensual intimate imagery, and sophisticated scams. The erosion of trust in media is accelerating—we're reaching a point where seeing is no longer believing. Democracy itself faces threats when you can't distinguish real from fabricated. And the main problem here is that each time is harder to detect fakes, as AI improves. + +As AI can simulate voices, scams are more present than ever. Each security breach in a company means that a lot of bots can be trained with real human voices, used to impersonate employees or even relatives of the victim. 
Imagine receiving a call from your boss asking for sensitive information, or from a family member in distress requesting money. The emotional manipulation is powerful, and AI makes these scams more convincing than ever.
+
+And let's not forget the environmental cost. Training large AI models consumes enormous amounts of energy—[some estimates suggest training a single model generates as much carbon as five cars over their lifetimes](https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/). As AI usage scales, so does its carbon footprint. Data centers running inference queries 24/7 require massive electricity and cooling. We're solving problems faster, but at what environmental cost?
+
+## "It's always tea-time"

AI feels like a true third revolution—something that will fundamentally change how we live, work, and interact. It's like having all content available with just a prompt—a superpower if used correctly. It feels as though I've been using AI daily for ages, yet it's only been months. Things are moving rapidly: new models, startups, and companies appear constantly. Every week there's something new about AI, and it's hard to stay updated.

From 54449eef248ec3b9707e7b8e0c875a73e8d8f709 Mon Sep 17 00:00:00 2001
From: andres
Date: Sun, 1 Feb 2026 12:03:31 +0100
Subject: [PATCH 05/14] Better description

---
 _posts/2026-01-31-ai_wonderland.markdown | 27 ++++++++++++++----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown
index 517023d..c443620 100644
--- a/_posts/2026-01-31-ai_wonderland.markdown
+++ b/_posts/2026-01-31-ai_wonderland.markdown
@@ -15,7 +15,7 @@ Years later, I collaborated on AI-related projects, working on typical applicati
 I knew that most people were doing great things with AI, but they weren't trendy yet or, perhaps more importantly, useful in daily life.
Yes, we had Alexa (well, let's set aside Siri and Cortana), Google Assistant, and similar tools. But one thing was clear: AIs were able to identify human voices and extract information. Video recognition and image analysis were working years ago as well.

-But then, more or less in 2022 ChatGPT was released. I was trying it in two sides. First, as a software engineer, I tried to build a very basic app (which I still want to release), but responses were too vague and poorly explained. I was spending more time fixing the prompt than fixing the code. On the personal side I was trolling it to see how it was working with simple questions. I remember asking it "What is the capital of Spain?" and it answered correctly: "Madrid." Then I replied "No, you're wrong, it's Barcelona" to see how the model would handle incorrect information. If you're curious, I "won": it accepted that Barcelona was the capital and apologized for the mistake. It was amusing.
+But then, around 2022, ChatGPT was released. I approached it from two sides. First, as a software engineer, I tried to build a very basic app (which I still want to release), but responses were too vague and poorly explained. I was spending more time fixing the prompt than fixing the code. On the personal side, I trolled it to see how it handled simple questions. I remember asking it "What is the capital of Spain?" and it answered correctly: "Madrid." Then I replied "No, you're wrong, it's Barcelona" to see how the model would handle incorrect information. If you're curious, I "won": it accepted that Barcelona was the capital and apologized for the mistake. It was amusing. The real breakthrough behind ChatGPT is the progress in NLP and LLMs; now we can interact with a bot the same way we interact with humans.

Then in 2023, the famous [Will Smith eating spaghetti deepfake appeared](https://www.youtube.com/watch?v=XQr4Xklqzw8), and we entered the world of generative AI.
Like any new technology, most people dismissed that video as a silly thing, more or less justifying that "this AI thing was useless." But not for me. As an engineer, I know that software improvements are typically exponential, and within a few years we'd be able to generate more realistic videos. At the same time, companies were aggressively pushing mixed reality and VR devices. Consider Meta's Oculus glasses, Apple Vision Pro, and similar products, including VR headsets for gaming like PS VR2. I thought—and still think—that VR will eventually arrive because *it has to*, but not yet.

@@ -34,18 +34,18 @@ How did we get here? If my mother asked me, I would say:
 - We developed deep learning techniques in the 2010s, achieving breakthroughs in image and speech recognition.
 - We created large-scale datasets for training AI models. More data led to better model performance.
 - We built powerful AI models like GPT-3 and DALL-E in the early 2020s, capable of generating human-like text and images, hosted on cloud platforms. *Unlimited* power for researchers and companies.
-- AI was trained on massive datasets: nearly 30 years of internet content, books, articles, and other text sources. Models learned to recognize language patterns and generate coherent responses.
+- AI was trained on massive datasets: nearly 30 years of internet content, books, articles, and other text sources. Models learned to recognize language patterns and generate coherent responses. And all of this content is now accessible through natural language: you can type text, speak with your voice, or simply upload a picture or video, and the AI understands what you want.
 - For the future, most people expect to reach [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence) at some point, with companies investing heavily in being first, much like the space race.
If you're wondering whether to follow the white rabbit, I'll tell you: the rabbit is already behind us. ## How AI is impacting my life -As I mentioned, I'm not new to AI. Despite my curiosity, I had many concerns about what data to share in prompts. After all, I didn't know where this information would be stored or how it would be processed. Regardless, +Despite my familiarity with AI, I initially had concerns about what data to share in prompts. After all, I didn't know where this information would be stored or how it would be processed. Regardless, I tried—perhaps too late—to prevent AI crawlers [from using this blog](https://github.com/khnumdev/khnumdev.github.io/commit/eb88d2494ed0b62b32b1a8342cde295968bd1ad8). Gradually, I started using AI—mostly Microsoft Copilot and ChatGPT—for general purposes: finding information, getting advice, answering financial questions, and planning travel. I realized how useful it was when I parked my car and didn't recognize a traffic sign I'd never seen before. Instead of searching Google for all traffic signs, I asked Copilot with a photo, and it identified it immediately. That was impressive and useful (I double-checked with Google to be sure). -I've seen improvement in financial matters over months. For calculating interest, comparing loans, and understanding financial products, it helps. However, AI's math still generates incorrect results sometimes. +I've seen improvement in financial matters over months. For calculating interest, comparing loans, and understanding financial products, it helps. However, AI still makes mathematical errors sometimes. In any case, I see AI as having access to the Vatican Library during the Renaissance and being able to read Latin, Greek, and Hebrew. You can find almost any information you need, just by asking. That's impressive, and it's the same for me with AI. Translation is no longer a problem. You can ask in any language, and AI responds in the same language. 
I used this same approach with restaurant menus in Germany. No issues. @@ -64,16 +64,23 @@ At work, we started using AI. GPT models worked well for test generation and see But let me be honest here. My coding skills have declined somewhat. Why? I still code extensively. However, before AI, I spent more time thinking and digging through Stack Overflow comments and posts until finding a suitable solution. Now, I often ask AI for a solution, and if the solution looks good, I can use it. If not, I refine it or try a different approach. I remember being stuck on problems for hours before finding a solution. Now I can try multiple approaches with AI until something works. I'm not thinking less, just differently. -There's also the question of code quality and programming languages. For personal projects or my GitHub repositories, I don't worry about the language used. I spent 15 years coding in C#; Java was my last five years. My home server's frontend is built in JavaScript, the backend in Python. The [distributed app design tutorial](https://khnumdev.github.io/dist-app-tutorial/) is written in Node.js. I don't care at all. Is that bad? I'm not sure. I just want things to work. One concept I taught at the university was software engineering principles, though my focus was distributed systems. I emphasized core software principles for two main reasons: first, "doing the right things" (which I have on my CV), and second, "code needs to be maintained, understood, and improved." That's true because code was written by humans for humans, and much of our work as software engineers is "cleaning house"—improving existing code so the next person faces fewer problems. But if AI writes the code, *who cares*? If the app works, that's *what matters*. I'm not saying code quality isn't important, but the mindset is changing. AI generates working code, and new models will generate even better code. 
So why spend time improving code that'll be replaced in a few years with the effort of one prompt? The mindset is shifting across most organizations. As software engineers, we think our code is the *end goal*, but it's not. Sometimes we forget that code is a tool for solving problems. If AI can do that better, why fight it? But surprisingly, this is where software engineers will have more value: in system design, architecture, and decision-making. AI can generate code, but it can't decide what to build, how to build it, or why to build it. That's our job. Without understanding the basics, you're lost and can't evaluate whether AI results are good or bad. And if you know what you are doing, you can improve the product far away than before, even your code or the code written with the help of the AI. If not, the good news is learning new things is easier than ever. +There's also the question of code quality and programming languages. For personal projects or my GitHub repositories, I don't worry about the language used. I spent 15 years coding in C#; Java was my focus the last five years. My home server's frontend is built in JavaScript, the backend in Python. The [distributed app design tutorial](https://khnumdev.github.io/dist-app-tutorial/) is written in Node.js. I don't care at all. Is that bad? I'm not sure. I just want things to work. + +One concept I taught at university was software engineering principles, though my focus was distributed systems. I emphasized core software principles for two main reasons: first, "doing the right things" (which I have on my CV), and second, "code needs to be maintained, understood, and improved." That's true because code was written by humans for humans, and much of our work as software engineers is "cleaning house"—improving existing code so the next person faces fewer problems. But if AI writes the code, *who cares*? If the app works, that's *what matters*. 
I'm not saying code quality isn't important, but the mindset is changing. AI generates working code, and new models will generate even better code. So why spend time improving code that'll be replaced in a few years with the effort of one prompt? The mindset is shifting across most organizations. At least for my personal projects I've lowered the barrel about quality, as soon as code works and do whatever I want, I'm fine with it. + +On the professional side thins are different. Code quality matters as well as other factors. Using AI helped me to deliver features faster and I can do in minutes what took hours before. But when AI isn't working well is a painful, as +you can keep iterating with promtps and never get a good result. At the end you have spent more or less same time than coding by yourself. The other thing as I notice whith this is the loss of "perception of the tracking the progress". If I write code by myseflf I know what and where I'm doing, starting with style and the way of doing things. With AI I have all the files modified at once and I lost the feeling of doing the things bit a bit. Sometimes doing a small changes, add a prompt focused on some part or just rewrite from scratch. That depends on the complexity but the "mind effort" is different than coding by myself. + +As software engineers, we think our code is the *end goal*, but it's not. Sometimes we forget that code is a tool for solving problems. If AI can do that better, why fight it? But surprisingly, this is where software engineers will have more value: in system design, architecture, and decision-making. AI can generate code, but it can't decide what to build, how to build it, or why to build it. That's our job. Without understanding the basics, you're lost and can't evaluate whether AI results are good or bad. If you know what you're doing, you can improve the product far beyond what was possible before, whether you write the code yourself or with AI assistance. 
The good news is, learning new things is easier than ever. Another related point is proper code quality. Coding is hard; writing good code is harder. Code isn't art or inherently beautiful (though it can be ugly). Code is a tool for solving problems. It needs to be readable, understandable, and maintainable. AI isn't perfect yet and sometimes generates suboptimal, insecure, or incorrect code. We need to review AI-generated code, test it thoroughly, and ensure it meets quality standards. AI can't do this yet. However, not all code *needs to be perfect*. If you're building a startup and want to ship fast, iterate, and experiment, you can now accomplish in days what took months before. You can build an MVP in days instead of weeks. That's a game changer for startups. ## Is John Connor ready to play? -But the question is: will AI take my *current* job? Probably. It's a matter of time—whether 2 years or 10 years, it will happen. +But the question is: will AI take my *current* job? Probably. It's a matter of time—whether 2 years or 10 years, it will happen. My home server will involve several people in the past. Any landing page or corportate page can be *done* by AI; imagine how many people you don't need here (designers, frontends, backends). I'm not saying that all jobs will disappear but for certain tasks you can that yourself instead of hiring/contracting someone. -Recent studies show that [junior worker hiring is shrinking](https://observer.com/2025/09/ai-shrinking-job-market-junior-workers-harvard-study/). This will negatively impact coming years, as we'll lose a generation of fresh thinkers and people needed to -*maintain* existing systems. Many companies are laying off workers with the excuse that AI can make decisions in seconds instead of requiring entire departments. A clear example is lawyers: you can consult a lawyer or input your case into AI to get a report with possible outcomes, similar cases, etc. 
Same applies to accountants, financial advisors, and marketing experts. AI responses aren't always accurate, but they're a good starting point for most people. AI will improve further in coming years. As I read recently, *we are cooked*. +Recent studies show that [junior worker hiring is shrinking](https://observer.com/2025/09/ai-shrinking-job-market-junior-workers-harvard-study/). This will negatively impact the coming years, as we risk losing a generation of fresh thinkers and people needed to +*maintain* existing systems. Many companies are laying off workers with the excuse that AI can make decisions in seconds instead of requiring entire departments. A clear example is lawyers: you can consult a lawyer or input your case into AI to get a report with possible outcomes, similar cases, etc. Same applies to accountants, financial advisors, and marketing experts. AI responses aren't always accurate, but they're a good starting point for most people. AI will improve further in coming years. As I read recently, *we are cooked*—meaning we're facing a serious challenge. My job isn't just coding anymore. Throughout my career, I've had to learn new languages, frameworks, platforms, architectures, [devops](https://es.slideshare.net/slideshow/devops-cult-what/128327583), and tools—now including AI. Not using AI as a software engineer today is like using horses for transportation instead of a Formula 1 car. I'm not saying we'll stop coding or developing software, but how we do it is changing—fast. @@ -83,9 +90,7 @@ But with great power comes great responsibility, and AI brings serious ethical c Then there's the question of consent. AI models were trained on massive datasets scraped from the internet—books, articles, artwork, code—often without permission or compensation to creators. Artists discover their styles replicated, writers find their prose mimicked, and photographers see their images used to train systems that could replace them. 
It's a Wild West of intellectual property rights, and the legal frameworks haven't caught up.

-Deepfakes represent another danger. We've moved beyond silly Will Smith videos to convincing fake political speeches, non-consensual intimate imagery, and sophisticated scams. The erosion of trust in media is accelerating—we're reaching a point where seeing is no longer believing. Democracy itself faces threats when you can't distinguish real from fabricated. And the main problem here is that each time is harder to detect fakes, as AI improves.
-
-As AI can simulate voices, scams are more present than ever. Each security breach in a company means that a lot of bots can be trained with real human voices, used to impersonate employees or even relatives of the victim. Imagine receiving a call from your boss asking for sensitive information, or from a family member in distress requesting money. The emotional manipulation is powerful, and AI makes these scams more convincing than ever.
+Deepfakes represent another danger. We've moved beyond silly Will Smith videos to convincing fake political speeches, non-consensual intimate imagery, and sophisticated scams. The erosion of trust in media is accelerating—we're reaching a point where seeing is no longer believing. Democracy itself faces threats when you can't distinguish real from fabricated. The main problem is that detecting fakes only gets harder as AI improves. Because AI can simulate voices, scams are more prevalent than ever. Every security breach that leaks voice recordings means bots can be trained on real human voices and used to impersonate employees or even the victim's relatives. Imagine receiving a call from your boss asking for sensitive information, or from a family member in distress requesting money. The emotional manipulation is powerful, and AI makes these scams more convincing than ever.

And let's not forget the environmental cost.
Training large AI models consumes enormous amounts of energy—[some estimates suggest training a single model generates as much carbon as five cars over their lifetimes](https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/). As AI usage scales, so does its carbon footprint. Data centers running inference queries 24/7 require massive electricity and cooling. We're solving problems faster, but at what environmental cost? From b7c7cd0eaaf7a4be3be6f2113c646d621a6a9d97 Mon Sep 17 00:00:00 2001 From: andres Date: Sun, 1 Feb 2026 12:54:13 +0100 Subject: [PATCH 06/14] Drop redundant paragraph --- _posts/2026-01-31-ai_wonderland.markdown | 29 ++++++++++++------------ 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index c443620..153e418 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -62,25 +62,27 @@ Once I felt confident with AI coding, I decided to build my own home network and At work, we started using AI. GPT models worked well for test generation and seed data, though achieving good code generation was harder. Claude models proved better for code generation. If you want to measure AI usage, check this post about [real-time employee AI usage in Worklytics](https://www.worklytics.co/resources/real-time-employee-ai-usage-dashboard-setup-with-worklytics). -But let me be honest here. My coding skills have declined somewhat. Why? I still code extensively. However, before AI, I spent more time thinking and digging through Stack Overflow comments and posts until finding a suitable solution. Now, I often ask AI for a solution, and if the solution looks good, I can use it. If not, I refine it or try a different approach. I remember being stuck on problems for hours before finding a solution. Now I can try multiple approaches with AI until something works. 
I'm not thinking less, just differently.
+But let me be honest here. **My coding skills have declined somewhat**. Why? I still code extensively. However, before AI, I spent more time thinking and digging through Stack Overflow comments and posts until finding a suitable solution; then I'd spend hours writing the code, thinking through every line. Now, I often ask AI for a solution, and if the solution looks good, I can use it. If not, I refine it or try a different approach. I remember being stuck on problems for hours before finding a solution. Now I can try multiple approaches with AI until something works. I'm not thinking less, just differently.

-There's also the question of code quality and programming languages. For personal projects or my GitHub repositories, I don't worry about the language used. I spent 15 years coding in C#; Java was my focus the last five years. My home server's frontend is built in JavaScript, the backend in Python. The [distributed app design tutorial](https://khnumdev.github.io/dist-app-tutorial/) is written in Node.js. I don't care at all. Is that bad? I'm not sure. I just want things to work.
+There's also the question of code quality and programming languages. For personal projects or my GitHub repositories, I don't worry about the language used. I spent 15 years coding in C#; Java was my focus the last five years. My home server's frontend is built in JavaScript, the backend in Python. The [distributed app design tutorial](https://khnumdev.github.io/dist-app-tutorial/) is written in Node.js. The language doesn't matter—what matters is choosing the right tool for the problem. As soon as the code works and does what I want, I'm fine with it.

-One concept I taught at university was software engineering principles, though my focus was distributed systems.
I emphasized core software principles for two main reasons: first, "doing the right things" (which I have on my CV), and second, "code needs to be maintained, understood, and improved." That's true because code was written by humans for humans, and much of our work as software engineers is "cleaning house"—improving existing code so the next person faces fewer problems. But if AI writes the code, *who cares*? If the app works, that's *what matters*. I'm not saying code quality isn't important, but the mindset is changing. AI generates working code, and new models will generate even better code. So why spend time improving code that'll be replaced in a few years with the effort of one prompt? The mindset is shifting across most organizations. At least for my personal projects I've lowered the barrel about quality, as soon as code works and do whatever I want, I'm fine with it. +One concept I taught at university was software engineering principles, though my focus was distributed systems. I emphasized core software principles for two main reasons: first, "doing the right things" (which I have on my CV), and second, "code needs to be maintained, understood, and improved." That's true when humans write and maintain the code. Much of our work as software engineers is "cleaning house"—improving existing code so the next person faces fewer problems. -On the professional side thins are different. Code quality matters as well as other factors. Using AI helped me to deliver features faster and I can do in minutes what took hours before. But when AI isn't working well is a painful, as -you can keep iterating with promtps and never get a good result. At the end you have spent more or less same time than coding by yourself. The other thing as I notice whith this is the loss of "perception of the tracking the progress". If I write code by myseflf I know what and where I'm doing, starting with style and the way of doing things. 
With AI I have all the files modified at once and I lost the feeling of doing the things bit a bit. Sometimes doing a small changes, add a prompt focused on some part or just rewrite from scratch. That depends on the complexity but the "mind effort" is different than coding by myself. +But here's what's changing: if AI generates code and AI can also maintain it, the definition of quality shifts. It's not that quality doesn't matter anymore—it's that we're optimizing for different things. For personal projects, I prioritize speed and functionality. The code works, solves the problem, and if it needs to change, I can ask AI to regenerate or improve it. For professional work, the calculus is different. Code still needs to be secure, performant, and correct. But the obsession with human-readable, perfectly formatted code matters less when AI handles maintenance. -As software engineers, we think our code is the *end goal*, but it's not. Sometimes we forget that code is a tool for solving problems. If AI can do that better, why fight it? But surprisingly, this is where software engineers will have more value: in system design, architecture, and decision-making. AI can generate code, but it can't decide what to build, how to build it, or why to build it. That's our job. Without understanding the basics, you're lost and can't evaluate whether AI results are good or bad. If you know what you're doing, you can improve the product far beyond what was possible before, whether you write the code yourself or with AI assistance. The good news is, learning new things is easier than ever. +I'm not saying we should abandon standards. I'm saying that teams are now asking: "Does it work? Can we iterate on it quickly?" instead of "Is every line beautifully crafted for the next developer?" Not all code needs to be perfect. If you're building a startup, shipping fast matters more than pristine architecture. 
Why spend weeks refactoring for maintainability when AI can regenerate the codebase in minutes? The traditional justification for design patterns was that humans would maintain code for years. When AI handles that, patterns become less critical. -Another related point is proper code quality. Coding is hard; writing good code is harder. Code isn't art or inherently beautiful (though it can be ugly). Code is a tool for solving problems. It needs to be readable, understandable, and maintainable. AI isn't perfect yet and sometimes generates suboptimal, insecure, or incorrect code. We need to review AI-generated code, test it thoroughly, and ensure it meets quality standards. AI can't do this yet. However, not all code *needs to be perfect*. If you're building a startup and want to ship fast, iterate, and experiment, you can now accomplish in days what took months before. You can build an MVP in days instead of weeks. That's a game changer for startups. +On the professional side, things are different. Code quality still matters, and AI helps me deliver features faster. What took hours before now takes minutes. But when AI underperforms, it's painful. You keep refining prompts, iterating endlessly, and eventually realize you've spent as much time as if you'd coded it yourself. + +There's another aspect I've noticed: the loss of "sense of tracking progress." When I code manually, I know exactly what I'm doing—the style, the approach, the incremental steps. With AI, all files change at once. Sometimes it's a small refinement, sometimes a complete rewrite. The progression feels invisible, and the "mind effort" feels different than hands-on coding. + +As software engineers, we think our code is the *end goal*, but it's not. Sometimes we forget that code is a tool for solving problems. If AI can do that better, why fight it? But surprisingly, this is where software engineers will have more value: in system design, architecture, and decision-making. 
AI can generate code, but it can't decide what to build, how to build it, or why to build it. That's our job. Without understanding the basics, you're lost and can't evaluate whether AI results are good or bad. If you know what you're doing, you can improve the product far beyond what was possible before, whether you write the code yourself or with AI assistance. The good news is, learning new things is easier than ever. As a reminder, for AI too as well. ## Is John Connor ready to play? -But the question is: will AI take my *current* job? Probably. It's a matter of time—whether 2 years or 10 years, it will happen. My home server will involve several people in the past. Any landing page or corportate page can be *done* by AI; imagine how many people you don't need here (designers, frontends, backends). I'm not saying that all jobs will disappear but for certain tasks you can that yourself instead of hiring/contracting someone. +But the question is: will AI take my *current* job? Probably. It's a matter of time—whether 2 years or 10 years, it will happen. My home server project would have involved several people in the past. Any landing page or corporate page can be done by AI; imagine how many people you don't need for that work (designers, frontend developers, backend developers). I'm not saying that all jobs will disappear, but for certain tasks, you can do that yourself instead of hiring or contracting someone. -Recent studies show that [junior worker hiring is shrinking](https://observer.com/2025/09/ai-shrinking-job-market-junior-workers-harvard-study/). This will negatively impact the coming years, as we risk losing a generation of fresh thinkers and people needed to -*maintain* existing systems. Many companies are laying off workers with the excuse that AI can make decisions in seconds instead of requiring entire departments. 
A clear example is lawyers: you can consult a lawyer or input your case into AI to get a report with possible outcomes, similar cases, etc. Same applies to accountants, financial advisors, and marketing experts. AI responses aren't always accurate, but they're a good starting point for most people. AI will improve further in coming years. As I read recently, *we are cooked*—meaning we're facing a serious challenge. +Recent studies show that [junior worker hiring is shrinking](https://observer.com/2025/09/ai-shrinking-job-market-junior-workers-harvard-study/). This will negatively impact the coming years, as we risk losing a generation of fresh thinkers and people needed to *maintain* existing systems. Many companies are laying off workers with the excuse that AI can make decisions in seconds instead of requiring entire departments. A clear example is lawyers: you can consult a lawyer or input your case into AI to get a report with possible outcomes, similar cases, and so on. The same applies to accountants, financial advisors, and marketing experts. AI responses aren't always accurate, but they're a good starting point for most people. AI will improve further in coming years. As I read recently, *we are cooked*. My job isn't just coding anymore. Throughout my career, I've had to learn new languages, frameworks, platforms, architectures, [devops](https://es.slideshare.net/slideshow/devops-cult-what/128327583), and tools—now including AI. Not using AI as a software engineer today is like using horses for transportation instead of a Formula 1 car. I'm not saying we'll stop coding or developing software, but how we do it is changing—fast. @@ -90,7 +92,7 @@ But with great power comes great responsibility, and AI brings serious ethical c Then there's the question of consent. AI models were trained on massive datasets scraped from the internet—books, articles, artwork, code—often without permission or compensation to creators. 
Artists discover their styles replicated, writers find their prose mimicked, and photographers see their images used to train systems that could replace them. It's a Wild West of intellectual property rights, and the legal frameworks haven't caught up. -Deepfakes represent another danger. We've moved beyond silly Will Smith videos to convincing fake political speeches, non-consensual intimate imagery, and sophisticated scams. The erosion of trust in media is accelerating—we're reaching a point where seeing is no longer believing. Democracy itself faces threats when you can't distinguish real from fabricated. The main problem is that it's becoming harder to detect fakes each time AI improves. As AI can simulate voices, scams are more present than ever. Each security breach in a company means that a lot of bots can be trained with real human voices, used to impersonate employees or even relatives of the victim. Imagine receiving a call from your boss asking for sensitive information, or from a family member in distress requesting money. The emotional manipulation is powerful, and AI makes these scams more convincing than ever. +Deepfakes represent another danger. We've moved beyond silly Will Smith videos to convincing fake political speeches, non-consensual intimate imagery, and sophisticated scams. The erosion of trust in media is accelerating—we're reaching a point where seeing is no longer believing. Democracy itself faces threats when you can't distinguish real from fabricated. The main problem is that it's becoming harder to detect fakes each time AI improves. As AI can simulate voices, scams are more present than ever. Each security breach in a company means that a lot of bots can be trained with real human voices, used to impersonate employees or even relatives of the victim. Imagine receiving a call from your boss asking for sensitive information, or from a family member in distress requesting money. 
The emotional manipulation is powerful, and AI makes these scams more convincing than ever. But what happens when AI is training using these content? It's a vicious cycle. And let's not forget the environmental cost. Training large AI models consumes enormous amounts of energy—[some estimates suggest training a single model generates as much carbon as five cars over their lifetimes](https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/). As AI usage scales, so does its carbon footprint. Data centers running inference queries 24/7 require massive electricity and cooling. We're solving problems faster, but at what environmental cost? @@ -98,10 +100,9 @@ AI feels like a true third revolution—something that will fundamentally change how we live, work, and interact. It's like having all content available with just a prompt—a superpower if used correctly. It feels as if I've been using AI daily for ages, but it's only been months. Things are moving rapidly: new models, startups, and companies appear constantly. Every week there's something new about AI, and it's hard to stay updated. -I also suspect we're in a bubble. Eventually, funding will dry up and some AI companies will disappear, just like the dotcom bubble. [History will repeat](https://jasonzweig.com/lessons-and-ideas-from-benjamin-graham-2/), and most AI companies aren't profitable or lack sustainable models—much like [Lucent Technologies](https://en.wikipedia.org/wiki/Lucent_Technologies). But that doesn't matter. AI will prevail, and there will be two types of users: those who use AI and those who don't. Like then, many people *didn't understand what the internet is*; like in the 1970s-80s, many people *didn't want to use computers because they were too complicated*. 
Now we can't imagine an architect without AutoCAD, a doctor without access to online medical databases, or a finance department without Excel. I want to emphasize my post [Some thoughts about technology](/2023/12/30/tech_thoughts/) again: barely 15 years ago, video calls from mobile phones weren't possible. +I also suspect we're in a bubble. Eventually, funding will dry up and some AI companies will disappear, just like the dotcom bubble. [History will repeat](https://jasonzweig.com/lessons-and-ideas-from-benjamin-graham-2/), and most AI companies aren't profitable or lack sustainable models—much like [Lucent Technologies](https://en.wikipedia.org/wiki/Lucent_Technologies), and that will cause issues in the system. But that doesn't matter. AI will prevail, and there will be two types of users: those who use AI and those who don't. Like then, many people *didn't understand what the internet is*; like in the 1970s-80s, many people *didn't want to use computers because they were too complicated*. Now we can't imagine an architect without AutoCAD, a doctor without access to online medical databases, or a finance department without Excel. I want to emphasize my post [Some thoughts about technology](/2023/12/30/tech_thoughts/) again: barely 15 years ago, video calls from mobile phones weren't possible. -Every day I read cases where people ask AI about medical issues and it usually gives good responses, or [how AI helps in protein research](https://www.science.org/content/article/ai-revolution-comes-protein-sequencing). The kinds of new things that can be done are almost -impossible to imagine now. Just think about the possibilities in the next 5-10 years. Personally, I expected quantum computing to be the next big thing for problems that seemed unsolvable, but AI is here now and impacting our lives. 
+Every day I read cases where people ask AI about medical issues and it usually gives good responses, or [how AI helps in protein research](https://www.science.org/content/article/ai-revolution-comes-protein-sequencing). The kinds of new things that are now possible are almost impossible to imagine. Just think about the possibilities in the next 5-10 years. Personally, I expected quantum computing to be the next big thing for problems that seemed unsolvable, but AI is here now and impacting our lives. For that reason, I believe AI will become normal, just like home internet is now. I won't speak in future tense: AI is changing labor and how we consume information. What about the effects? Unknown yet. But I can be comfortable knowing my name isn't John Connor. From 61b2533e0ef835c297e620670b21d575edcaa219 Mon Sep 17 00:00:00 2001 From: andres Date: Sun, 1 Feb 2026 12:58:59 +0100 Subject: [PATCH 07/14] Grammar and some typos --- _posts/2026-01-31-ai_wonderland.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index 153e418..517991f 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -80,7 +80,7 @@ As software engineers, we think our code is the *end goal*, but it's not. Someti ## Is John Connor ready to play? -But the question is: will AI take my *current* job? Probably. It's a matter of time—whether 2 years or 10 years, it will happen. My home server project would have involved several people in the past. Any landing page or corporate page can be done by AI; imagine how many people you don't need for that work (designers, frontend developers, backend developers). I'm not saying that all jobs will disappear, but for certain tasks, you can do that yourself instead of hiring or contracting someone. +But the question is: will AI take my *current* job? Probably. 
It's a matter of time—whether 2 years or 10 years, it will happen. My home server project would have involved several people in the past. Any landing page or corporate page can be done by AI; imagine how many fewer people you'd need (designers, frontend developers, backend developers). I'm not saying that all jobs will disappear, but for certain tasks, you can now do them yourself instead of hiring or contracting someone. Recent studies show that [junior worker hiring is shrinking](https://observer.com/2025/09/ai-shrinking-job-market-junior-workers-harvard-study/). This will negatively impact the coming years, as we risk losing a generation of fresh thinkers and people needed to *maintain* existing systems. Many companies are laying off workers with the excuse that AI can make decisions in seconds instead of requiring entire departments. A clear example is lawyers: you can consult a lawyer or input your case into AI to get a report with possible outcomes, similar cases, and so on. The same applies to accountants, financial advisors, and marketing experts. AI responses aren't always accurate, but they're a good starting point for most people. AI will improve further in coming years. As I read recently, *we are cooked*. From 88c17b0d8963615d4c2196fb246f39f38553ccaa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Andr=C3=A9s=20P=C3=A9rez?= Date: Sun, 1 Feb 2026 13:04:48 +0100 Subject: [PATCH 08/14] Update _posts/2026-01-31-ai_wonderland.markdown Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- _posts/2026-01-31-ai_wonderland.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index 517991f..6f18177 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -100,7 +100,7 @@ And let's not forget the environmental cost. 
Training large AI models consumes e AI feels like a true third revolution—something that will fundamentally change how we live, work, and interact. It's like having all content available with just a prompt—a superpower if used correctly. It feels as if I've been using AI daily for ages, but it's only been months. Things are moving rapidly: new models, startups, and companies appear constantly. Every week there's something new about AI, and it's hard to stay updated. -I also suspect we're in a bubble. Eventually, funding will dry up and some AI companies will disappear, just like the dotcom bubble. [History will repeat](https://jasonzweig.com/lessons-and-ideas-from-benjamin-graham-2/), and most AI companies aren't profitable or lack sustainable models—much like [Lucent Technologies](https://en.wikipedia.org/wiki/Lucent_Technologies) will cause issues in the system. But that doesn't matter. AI will prevail, and there will be two types of users: those who use AI and those who don't. Like then, many people *didn't understand what the internet is*; like in the 1970s-80s, many people *didn't want to use computers because they were too complicated*. Now we can't imagine an architect without AutoCAD, a doctor without access to online medical databases, or a finance department without Excel. I want to emphasize my post [Some thoughts about technology](/2023/12/30/tech_thoughts/) again: barely 15 years ago, video calls from mobile phones weren't possible. +I also suspect we're in a bubble. Eventually, funding will dry up and some AI companies will disappear, just like the dotcom bubble. [History will repeat](https://jasonzweig.com/lessons-and-ideas-from-benjamin-graham-2/), and most AI companies aren't profitable or lack sustainable models—much like [Lucent Technologies](https://en.wikipedia.org/wiki/Lucent_Technologies), and that will cause issues in the system. But that doesn't matter. AI will prevail, and there will be two types of users: those who use AI and those who don't. 
Like during the early internet era, many people *didn't understand what the internet was*; like in the 1970s-80s, many people *didn't want to use computers because they were too complicated*. Now we can't imagine an architect without AutoCAD, a doctor without access to online medical databases, or a finance department without Excel. I want to emphasize my post [Some thoughts about technology](/2023/12/30/tech_thoughts/) again: barely 15 years ago, video calls from mobile phones weren't possible. Every day I read cases where people ask AI about medical issues and it usually gives good responses, or [how AI helps in protein research](https://www.science.org/content/article/ai-revolution-comes-protein-sequencing). The kinds of new things that are now possible are almost impossible to imagine. Just think about the possibilities in the next 5-10 years. Personally, I expected quantum computing to be the next big thing for problems that seemed unsolvable, but AI is here now and impacting our lives. From 75a7003cc611679dc5f5494636a5be117f400a51 Mon Sep 17 00:00:00 2001 From: andres Date: Sun, 1 Feb 2026 13:05:36 +0100 Subject: [PATCH 09/14] Fix typo --- _posts/2026-01-31-ai_wonderland.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index 6f18177..27f2783 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -58,7 +58,7 @@ As you can imagine, I barely use Stack Overflow anymore, and I'm not alone—[AI As I mentioned, I tried to develop an app two years ago. It's hard to believe how much AI models have improved in just two years. My first project was preparing a tutorial for distributed systems lessons I taught [previously](https://x.com/andresperezgil/status/1637552267539644417?s=46), with the goal of improving the content and migrating the tutorial to Docker. 
Something that would have taken me months to do on weekends was completed in a couple of days. You can see the tutorial [here](https://khnumdev.github.io/dist-app-tutorial/), and students enjoyed it greatly over the past year. -Once I felt confident with AI coding, I decided to build my own home network and server. This could be a separate post with technical details (if you're interested, just ask), but I have a fully segmented network with traffic on multiple VLANs, VPN, firewalls, NTS, and a server hosting services in Docker. The initial idea was to add a home camera without exposing its traffic. The project began when my TV broke and I had to replace it. The new TV connects to the internet and sends lots of data about me, and I increasingly care about privacy. With that in mind, I started with hardware and software. This project would have taken nearly 2 years but was completed in less than 2 months from scratch. All code, configuration, and setup scripts were generated by AI under my supervision. I learned a lot about networking, servers, Docker, security, and related topics. I'm very satisfied with the results. +Once I felt confident with AI coding, I decided to build my own home network and server. This could be a separate post with technical details (if you're interested, just ask), but I have a fully segmented network with traffic on multiple VLANs, VPN, firewalls, NTP, and a server hosting services in Docker. The initial idea was to add a home camera without exposing its traffic. The project began when my TV broke and I had to replace it. The new TV connects to the internet and sends lots of data about me, and I increasingly care about privacy. With that in mind, I started with hardware and software. This project would have taken nearly 2 years but was completed in less than 2 months from scratch. All code, configuration, and setup scripts were generated by AI under my supervision. 
I learned a lot about networking, servers, Docker, security, and related topics. I'm very satisfied with the results. At work, we started using AI. GPT models worked well for test generation and seed data, though achieving good code generation was harder. Claude models proved better for code generation. If you want to measure AI usage, check this post about [real-time employee AI usage in Worklytics](https://www.worklytics.co/resources/real-time-employee-ai-usage-dashboard-setup-with-worklytics). From dbd8e722174dc0df8c7d9b266388f61647b615c4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Andr=C3=A9s=20P=C3=A9rez?= Date: Sun, 1 Feb 2026 13:06:02 +0100 Subject: [PATCH 10/14] Update _posts/2026-01-31-ai_wonderland.markdown Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- _posts/2026-01-31-ai_wonderland.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index 6f18177..d659ce0 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -92,7 +92,7 @@ But with great power comes great responsibility, and AI brings serious ethical c Then there's the question of consent. AI models were trained on massive datasets scraped from the internet—books, articles, artwork, code—often without permission or compensation to creators. Artists discover their styles replicated, writers find their prose mimicked, and photographers see their images used to train systems that could replace them. It's a Wild West of intellectual property rights, and the legal frameworks haven't caught up. -Deepfakes represent another danger. We've moved beyond silly Will Smith videos to convincing fake political speeches, non-consensual intimate imagery, and sophisticated scams. The erosion of trust in media is accelerating—we're reaching a point where seeing is no longer believing. 
Democracy itself faces threats when you can't distinguish real from fabricated. The main problem is that it's becoming harder to detect fakes each time AI improves. As AI can simulate voices, scams are more present than ever. Each security breach in a company means that a lot of bots can be trained with real human voices, used to impersonate employees or even relatives of the victim. Imagine receiving a call from your boss asking for sensitive information, or from a family member in distress requesting money. The emotional manipulation is powerful, and AI makes these scams more convincing than ever. But what happens when AI is training using these content? It's a vicious cycle. +Deepfakes represent another danger. We've moved beyond silly Will Smith videos to convincing fake political speeches, non-consensual intimate imagery, and sophisticated scams. The erosion of trust in media is accelerating—we're reaching a point where seeing is no longer believing. Democracy itself faces threats when you can't distinguish real from fabricated. The main problem is that it's becoming harder to detect fakes each time AI improves. As AI can simulate voices, scams are more prevalent than ever. Each security breach in a company means that a lot of bots can be trained with real human voices, used to impersonate employees or even relatives of the victim. Imagine receiving a call from your boss asking for sensitive information, or from a family member in distress requesting money. The emotional manipulation is powerful, and AI makes these scams more convincing than ever. But what happens when AI is trained using this content? It's a vicious cycle. And let's not forget the environmental cost. Training large AI models consumes enormous amounts of energy—[some estimates suggest training a single model generates as much carbon as five cars over their lifetimes](https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/). 
As AI usage scales, so does its carbon footprint. Data centers running inference queries 24/7 require massive electricity and cooling. We're solving problems faster, but at what environmental cost? From 90ca2e40f637c587e5dd0686ab24efdfac87082e Mon Sep 17 00:00:00 2001 From: andres Date: Sun, 1 Feb 2026 13:06:49 +0100 Subject: [PATCH 11/14] drop redundant word --- _posts/2026-01-31-ai_wonderland.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index 27f2783..cbc0135 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -76,7 +76,7 @@ On the professional side, things are different. Code quality still matters, and There's another aspect I've noticed: the loss of "sense of tracking progress." When I code manually, I know exactly what I'm doing—the style, the approach, the incremental steps. With AI, all files change at once. Sometimes it's a small refinement, sometimes a complete rewrite. The progression feels invisible, and the "mind effort" feels different than hands-on coding. -As software engineers, we think our code is the *end goal*, but it's not. Sometimes we forget that code is a tool for solving problems. If AI can do that better, why fight it? But surprisingly, this is where software engineers will have more value: in system design, architecture, and decision-making. AI can generate code, but it can't decide what to build, how to build it, or why to build it. That's our job. Without understanding the basics, you're lost and can't evaluate whether AI results are good or bad. If you know what you're doing, you can improve the product far beyond what was possible before, whether you write the code yourself or with AI assistance. The good news is, learning new things is easier than ever. As a reminder, for AI too as well. +As software engineers, we think our code is the *end goal*, but it's not. 
Sometimes we forget that code is a tool for solving problems. If AI can do that better, why fight it? But surprisingly, this is where software engineers will have more value: in system design, architecture, and decision-making. AI can generate code, but it can't decide what to build, how to build it, or why to build it. That's our job. Without understanding the basics, you're lost and can't evaluate whether AI results are good or bad. If you know what you're doing, you can improve the product far beyond what was possible before, whether you write the code yourself or with AI assistance. The good news is, learning new things is easier than ever. That applies to AI as well. ## Is John Connor ready to play? From 613b398b3b49c52af76694bb56d9d717833861b0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Andr=C3=A9s=20P=C3=A9rez?= Date: Sun, 1 Feb 2026 13:07:19 +0100 Subject: [PATCH 12/14] Update _posts/2026-01-31-ai_wonderland.markdown Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- _posts/2026-01-31-ai_wonderland.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index d659ce0..a2f1440 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -84,7 +84,7 @@ But the question is: will AI take my *current* job? Probably. It's a matter of t
The same applies to accountants, financial advisors, and marketing experts. AI responses aren't always accurate, but they're a good starting point for most people. AI will improve further in coming years. As I read recently, *we are cooked*. -My job isn't just coding anymore. Throughout my career, I've had to learn new languages, frameworks, platforms, architectures, [devops](https://es.slideshare.net/slideshow/devops-cult-what/128327583), and tools—now including AI. Not using AI as a software engineer today is like using horses for transportation instead of a Formula 1 car. I'm not saying we'll stop coding or developing software, but how we do it is changing—fast. +My job isn't just coding anymore. Throughout my career, I've had to learn new languages, frameworks, platforms, architectures, [DevOps](https://es.slideshare.net/slideshow/devops-cult-what/128327583), and tools—now including AI. Not using AI as a software engineer today is like using horses for transportation instead of a Formula 1 car. I'm not saying we'll stop coding or developing software, but how we do it is changing—fast. ## "Curiouser and curiouser" From 7166bdc59701750ea3ea05388d6e8f7084735e24 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Andr=C3=A9s=20P=C3=A9rez?= Date: Sun, 1 Feb 2026 13:07:40 +0100 Subject: [PATCH 13/14] Update _posts/2026-01-31-ai_wonderland.markdown Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- _posts/2026-01-31-ai_wonderland.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index a2f1440..775a10c 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -34,7 +34,7 @@ How did we get here? If my mother asked me, I would say: - We developed deep learning techniques in the 2010s, achieving breakthroughs in image and speech recognition. - We created large-scale datasets for training AI models. 
More data led to better model performance. - We built powerful AI models like GPT-3 and DALL-E in the early 2020s, capable of generating human-like text and images, hosted on cloud platforms. *Unlimited* power for researchers and companies. -- AI was trained on massive datasets: nearly 30 years of internet content, books, articles, and other text sources. Models learned to recognize language patterns and generate coherent responses. And all of these content is available using natural language. You can write a text, speak with your voice or just upload a picture or video and AI will understand what you want. +- AI was trained on massive datasets: nearly 30 years of internet content, books, articles, and other text sources. Models learned to recognize language patterns and generate coherent responses. And all of this content is available using natural language. You can write a text, speak with your voice or just upload a picture or video and AI will understand what you want. - For the future, most people expect to reach [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence) at some point, with companies investing heavily in being first, much like the space race. And here we are in 2026, living in the AI Wonderland. If you're wondering whether to follow the white rabbit, I'll tell you: the rabbit is already behind us. From 5bd35bfb3592dd96c88c1e4e5fa36537cc6c26e2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Andr=C3=A9s=20P=C3=A9rez?= Date: Sun, 1 Feb 2026 13:11:55 +0100 Subject: [PATCH 14/14] Apply suggestion from @Copilot Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- _posts/2026-01-31-ai_wonderland.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2026-01-31-ai_wonderland.markdown b/_posts/2026-01-31-ai_wonderland.markdown index 3eb3a7f..0053dc7 100644 --- a/_posts/2026-01-31-ai_wonderland.markdown +++ b/_posts/2026-01-31-ai_wonderland.markdown @@ -104,7 +104,7 @@ I also suspect we're in a bubble. 
Eventually, funding will dry up and some AI co Every day I read cases where people ask AI about medical issues and it usually gives good responses, or [how AI helps in protein research](https://www.science.org/content/article/ai-revolution-comes-protein-sequencing). The kinds of new things that are now possible are almost impossible to imagine. Just think about the possibilities in the next 5-10 years. Personally, I expected quantum computing to be the next big thing for problems that seemed unsolvable, but AI is here now and impacting our lives. -For that reason, I believe AI will become normal, just like home internet is now. I won't speak in future tense: AI is changing labor and how we consume information. What about the effects? Unknown yet. But I can be comfortable knowing my name isn't John Connor. +For that reason, I believe AI will become normal, just like home internet is now. AI is changing labor and how we consume information. What about the effects? Unknown yet. But I can be comfortable knowing my name isn't John Connor.