Categoría: Opinión

  • Telecom Services Paralyzed After Nationwide Blackout

    On April 28th, a general blackout struck all of Spain and, by extension, Portugal as well, given the proximity and electrical interconnection between the two countries. At 12:30 PM a widespread power outage hit every device connected to the grid, and for a few minutes confusion and anxiety prevailed. During those minutes the fiber optic connections began to drop one by one, so only a lucky few with an Uninterruptible Power Supply (commonly known as a UPS) keeping their router alive could still reach a few news websites and realize that the outage was bigger than it first appeared.

    Unfortunately, a UPS only lasts a few minutes: just enough to ride out an electrical fluctuation or to shut servers down safely if the outage drags on. That left us with only the mobile networks, the supposed saviors during catastrophes, with batteries meant to last 24 hours without electricity. Or at least, that is how it should have been.
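
    What “safely shutting down servers” means in practice is simply that the machines watch the UPS and power off cleanly before the battery dies. Below is a minimal sketch of that idea, assuming a UPS managed by Network UPS Tools (NUT) and exposed as “myups” on the local host; the names and the polling interval are illustrative, not a recommended configuration.

```python
# Minimal sketch: poll a NUT-managed UPS and shut the host down cleanly
# when the battery runs low. Assumes Network UPS Tools is installed and
# a UPS named "myups" is configured on localhost; adjust to your setup.
import subprocess
import time

def ups_status(ups: str = "myups@localhost") -> str:
    # `upsc <ups> ups.status` prints flags such as "OL" (on line power),
    # "OB" (on battery) or "OB LB" (on battery, battery low).
    result = subprocess.run(
        ["upsc", ups, "ups.status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def main() -> None:
    while True:
        status = ups_status()
        if "LB" in status:
            # Battery almost exhausted: power off cleanly while we still can.
            subprocess.run(["shutdown", "-h", "now"], check=False)
            return
        # A brief fluctuation ("OB" for a few seconds) simply rides through.
        time.sleep(30)

if __name__ == "__main__":
    main()
```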

    Unfortunately, in most cases the antennas also lost power. In others there was coverage (the antennas were powered) but no internet or voice traffic behind them. And in others still, the antennas lasted even less than a household UPS, with all signal gone in barely two minutes. Many antennas couldn't withstand even one hour of outage, leaving thousands of people completely cut off.

    As the ad once said, “the future is mobile,” and anything that isn't a mobile network hardly seems worth maintaining. The copper network, which carried its own power from the exchange and kept working during blackouts, has been dismantled and replaced by a fiber network that needs continuous local power, backed at best by UPS units that only bridge brief interruptions. But if Europe advises us to prepare an “emergency kit” designed to sustain us for at least 48 hours without leaving home, it should also recommend that communications be able to last more than just a few minutes.

    I confess I’m not a “prepper” at all—if disaster strikes, I’ll surely be caught without toilet paper, without drinking water, and with an empty fridge. Nevertheless, I’m aware that we should be at least minimally prepared and store a few cans of food “just in case.”

    We are so dependent on electricity and communications that when either fails, people panic; and since communications themselves depend on electricity, we are doubly vulnerable, as this blackout has demonstrated.

    Luckily, there weren’t many tragedies to mourn. Some people had difficulties with respirators, oxygen machines, or CPAP devices for sleep apnea, but in these cases, the inability to communicate greatly increases the sense of vulnerability and lack of response. (I won’t even go into political incompetence and the lack of useful, swift responses to a disaster of this magnitude.)

    Data centers are required to have backup systems that let them keep operating for many hours during an outage. Those backups rely on generators, solar panels, and the like, and thanks to them many systems kept running until power was restored. Movistar, however (and by extension every mobile operator that rides on its network), was still having problems with mobile service 48 hours after electricity returned. Is the mobile network really so fragile that it still couldn't carry phone calls two days after power came back?

    We must learn something from all this: in this case, that the electrical grid and power systems must be more robust, and that we need to be “more independent,” both at the national level and at the personal one. Backup batteries at home to power LED lamps and charge mobile phones, a battery-powered radio to hear the news, a walkie-talkie to communicate without depending on a company that may fail you at exactly that moment, an alternative way to cook besides the usual electric stove, microwave, or oven: all of this should be part of the “emergency kit” we now need to take seriously.

    As I said, we must learn something from every situation—and in this case, I’m going to take that “emergency kit” that Brussels recently announced (and half the country, myself included, laughed off) much more seriously.

  • DeepSeek Sparks the War of AI Models

    Just a few days after the new President of the United States announced he would channel 500 billion dollars into AI infrastructure for the main American companies dedicated to AI (OpenAI, Microsoft, Google, Meta, X, etc.), a Chinese company went ahead and published an AI model called DeepSeek-R1 under a free license. To be precise, many models are billed as “open source,” meaning their weights (and sometimes their code) are published, but in practice they come under very restrictive licenses (in Europe, for example, certain uses are prohibited…). In contrast, DeepSeek-R1 is a truly FREE model, released under the MIT license, one of the licenses most widely used by free software enthusiasts.

    However, what has really sparked a war of AI models is that DeepSeek appears to be far more optimized: it not only runs faster than ChatGPT but also needs much less computing hardware, so that practically anyone can run their own reasoning-capable LLM (at least the smaller, distilled versions of it) on a simple graphics card costing no more than €200.
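
    To make that concrete, here is a minimal sketch of running one of the small distilled R1 checkpoints locally with Hugging Face transformers; the 1.5B distillation fits in a few GB of VRAM, so a modest gaming card is enough. The model name and generation settings are just one reasonable choice under those assumptions, not a benchmark.

```python
# Minimal sketch: run a small DeepSeek-R1 distillation on a consumer GPU.
# Assumes `pip install torch transformers` and a CUDA-capable card; the
# 1.5B distilled checkpoint fits comfortably in a few GB of VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain in two sentences why mobile antennas need backup batteries."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# The distilled models "think out loud" before answering, so leave room.
output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

    Tools such as Ollama or llama.cpp package these same distilled weights behind a single command, which lowers the bar even further.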

    Of course, DeepSeek offers an online version that anyone can use completely free of charge. It also has a (commercial) API for the reasoning version (comparable to OpenAI's o1 model), at a markedly lower cost than OpenAI's API.
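
    That commercial API is OpenAI-compatible, so switching a client between vendors is mostly a matter of changing the base URL and the model name. A minimal sketch follows, assuming the endpoint and model identifier DeepSeek documented at the time of writing (https://api.deepseek.com and deepseek-reasoner); check the current documentation before relying on them.

```python
# Minimal sketch: call DeepSeek's reasoning model through its
# OpenAI-compatible API. Assumes `pip install openai` and an API key
# exported as DEEPSEEK_API_KEY; the endpoint and model name are the
# ones documented at the time of writing and may change.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # R1-style reasoning model
    messages=[
        {"role": "user", "content": "Summarize the EU AI Act in one paragraph."}
    ],
)
print(response.choices[0].message.content)
```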

    It seems that the United States wants to start its own “AI space race” against China, but the latter has just launched its first rocket to the Moon, blowing a hole of some 400 billion dollars in the market value of the company that manufactures the most widely used AI hardware: NVIDIA.


    In the United States:

    GPT-4 by OpenAI: GPT-4 is a large-scale language model that has set new standards in natural language processing, text generation, and contextual understanding. Its ability to generate coherent and relevant content has been widely recognized.

    Gemini 2.0 by Google DeepMind: Gemini 2.0 is a multimodal model that integrates text, image, and audio processing. It stands out for its advanced reasoning and understanding capabilities, enabling applications in various areas such as information retrieval and virtual assistance.

    Claude by Anthropic: Claude is a language model focused on safety and alignment with human values. It has been used in applications requiring secure and ethical interactions, contributing to responsible AI development.

    Grok-2 by xAI: From Elon Musk’s company, Grok-2 is a language model that integrates with platforms like X (formerly Twitter) to enhance interaction and content generation. Its focus on social media integration distinguishes it in the AI landscape.

    Llama 3 by Meta AI: Introduced in April 2024, Llama 3 is the latest version in Meta’s series of large language models. Available in 8 billion and 70 billion parameter versions, it outperforms other open-source models in various benchmark tests.


    In China:

    DeepSeek-R1: Developed by the startup DeepSeek, this reasoning model has surprised the tech sector with its efficiency and low cost. DeepSeek-R1 has overtaken competitors such as OpenAI's ChatGPT in app-store downloads, demonstrating that advanced AI development does not require huge investments.

    Qwen 2.5-Max by Alibaba: The tech giant Alibaba recently launched its Qwen 2.5-Max model, which claims to outperform advanced models like DeepSeek-V3, OpenAI’s GPT-4o, and Meta’s Llama-3.1-405B. This release highlights Alibaba’s rapid evolution in the AI field.

    Ernie Bot by Baidu: Baidu, known for its search engine, has developed Ernie Bot, a conversational AI model that has been integrated into various applications, demonstrating advanced natural language processing capabilities.

    Hunyuan by Tencent: Tencent, the company behind WeChat, created the AI model Hunyuan, which has been integrated into its messaging platform to improve interactions and provide more accurate responses to users.

    Kimi k1.5 by Moonshot AI: Moonshot AI has developed the Kimi k1.5 model, known for its multimodal and reasoning capabilities, positioning itself as a competitive alternative in the AI market.


    The Importance of AI in Europe…

    Meanwhile, in Europe, we watch in astonishment as the other players fight to lead in AI, waiting to see who will achieve supremacy in a technology that, as anyone can imagine, will be as revolutionary as the internet was in its day.

    The European Parliament has already worked on specific legislation for AI in Europe, classifying models by their “risk level,” establishing a “testing space” (a regulatory sandbox) in which to study whether algorithms are dangerous, and similar measures. The legislation itself is not bad; it merely seeks to ensure that AI is safe, ethical, and respectful of fundamental rights. However, some companies consider the obligations it imposes too demanding and liable to hinder innovation. To express their opposition to this “control,” they choose to withhold their technology from Europe, delaying or skipping European launches, thereby pressuring the authorities to relax the rules on the old continent.

    Whatever the case may be, Europe is currently lagging behind in this technological race for AI leadership. It’s nothing new; the same thing happens in other fields such as robotics and drones, for instance. While the United States and China are making real strides in terms of drones—not only with spectacular shows involving hundreds of units but also for transporting materials and/or people—in Europe, we have enacted such restrictive legislation that if you see someone flying a drone today, the police are probably already on their way to issue a fine.

  • Artificial Intelligence is a bubble that will burst differently from others

    Ever since I was young, I've had family conversations about society's «utopian» advances… about what the future would hold: computers that fit in the palm of your hand, back in 1986; robots that walked on two legs and kept their balance, in 1989. AI, too, has been advancing step by step for almost 40 years, but it had two major problems: the objective (very basic tools to obtain very basic data) and the cost (too high for the results obtained). So AI was always there, yet relegated to a mere «utopian advance» that we could only glimpse in science fiction movies.

    Today, AI has made both a quantitative and a qualitative leap. It is now possible to build complex tools and obtain complex results, like summarizing the whole of Don Quixote so that a 12-year-old can understand it, and at the same time its cost has been democratized: hardware with roughly the computing power used to play 3D games can already deliver very interesting solutions, without spending hundreds of millions of dollars on ultra-complex processing systems.

    I currently follow AI daily, for several reasons. One is that I'm passionate about it (the truth is, I'm passionate about any topic that helps us advance as a society, and AI is one of them). The other is that AI will clearly be incorporated into every new system we own, from the washing machine, the refrigerator, or the microwave to the house itself, letting us talk to it and automate it as no one could ever have imagined.

    Today, it’s not difficult to find hundreds of «free» or «semi-free» tools that offer wonderful advantages based on artificial intelligence. They allow us to make videos, modify photographs, create songs, compose music, write texts, help us summarize, and design outlines to quickly and easily learn any topic we want.


    AI Has a Cost

    However, these tools have a cost: an intrinsic cost driven by the computing power we demand. From the moment we send an audio recording and ask for it to be transcribed into text so we can «read» what was said in the conversation, there is a card drawing close to 800 W, giving off heat that raises the temperature around it by some 20 ºC and has to be cooled away to keep it from burning out, on top of the acquisition cost (the card, the server, the air conditioning, etc.). Multiply that by the number of simultaneous requests and the cost skyrockets. So how can companies like OpenAI, Google, or Amazon offer it at a fairly affordable price?
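
    A quick back-of-the-envelope calculation puts a number on that intrinsic cost. The figures below (an 800 W card, €0.20 per kWh, 60% utilization) are illustrative assumptions, not measured data.

```python
# Back-of-the-envelope electricity cost of keeping one inference card busy.
# Every figure here is an illustrative assumption, not measured data.
GPU_POWER_W = 800        # sustained draw of a high-end accelerator (assumed)
PRICE_PER_KWH = 0.20     # euros per kWh, a typical European retail price (assumed)
HOURS_PER_DAY = 24
UTILIZATION = 0.6        # fraction of the day the card is actually working (assumed)

kwh_per_day = GPU_POWER_W / 1000 * HOURS_PER_DAY * UTILIZATION
cost_per_day = kwh_per_day * PRICE_PER_KWH
print(f"{kwh_per_day:.1f} kWh/day -> {cost_per_day:.2f} EUR/day per card")
# Roughly 11.5 kWh and 2.30 EUR per day, per card, before cooling,
# hardware amortization, networking, and staff are even counted.
```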

    Hence the idea that AI is a bubble in which companies absorb the costs because it suits them to publicize the technology, get people accustomed to it and using it, and let them see how «cheap» it is at «low» rates that don't actually cover the costs. That's why OpenAI is the company that brought AI to the general public: everyone knows ChatGPT and can afford €20/month for a cheap system that works wonders… but few would pay what that system really costs, and that's why OpenAI is torn between bankruptcy and redesigning its entire commercial model to cover its costs.


    Charge as Much as Possible, but with Low Prices…

    Charging for AI is not simple at all. Just look at any price list: there is a price per «token» (the smallest unit into which a word or phrase is split; it can be a whole word, a punctuation mark, a sub-word such as half of a compound word, or even a special character) and a price per minute of audio to convert. Image pricing is even more curious, because it depends on the number of iterations, the resolution, the type of generation, and a long list of parameters. It is practically impossible to forecast the cost of a given number of interactions.
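
    To see what a «token» looks like in practice, OpenAI publishes the tiktoken library, which splits text the same way the billing does. A small sketch, assuming the cl100k_base encoding used by its GPT-3.5/GPT-4-era chat models:

```python
# Minimal sketch: see how a sentence splits into billable tokens.
# Assumes `pip install tiktoken`; cl100k_base is the encoding used by
# the GPT-3.5/GPT-4 family of chat models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Artificial intelligence is a bubble that will burst differently."
tokens = enc.encode(text)

print(len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])
# A word may come out whole or split into sub-words and punctuation;
# every one of those pieces counts toward the bill.
```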

    If I create a system that transcribes and answers calls using artificial intelligence, what costs will I have? It will all depend on the words that need to be transcribed, those that have to be used as input, those that have to be used for output, those that have to be converted back to audio… everything depends on many factors. And what if someone asks for something that triggers a high number of response words? Bad news… get your wallet ready.
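
    A rough per-call cost model makes that uncertainty concrete. Every price and token count below is a placeholder assumption (not any vendor's real price list), because the whole point is that the final figure depends on what the caller says:

```python
# Back-of-the-envelope cost of one transcribed-and-answered phone call.
# Every price and count below is a placeholder assumption; the real
# figures depend on the vendor's price list and on what the caller says.
def call_cost(
    minutes: float,                         # audio to transcribe
    input_tokens: int,                      # prompt + transcription fed to the LLM
    output_tokens: int,                     # the generated answer
    output_chars: int = 0,                  # characters spoken back as audio
    transcription_per_min: float = 0.006,   # EUR per minute (assumed)
    input_per_million: float = 1.0,         # EUR per 1M input tokens (assumed)
    output_per_million: float = 3.0,        # EUR per 1M output tokens (assumed)
    tts_per_million_chars: float = 15.0,    # EUR per 1M characters of speech (assumed)
) -> float:
    cost = minutes * transcription_per_min
    cost += input_tokens / 1_000_000 * input_per_million
    cost += output_tokens / 1_000_000 * output_per_million
    cost += output_chars / 1_000_000 * tts_per_million_chars
    return cost

# A short, well-behaved call...
print(round(call_cost(3, 800, 300, output_chars=1_200), 4), "EUR")       # ~0.0377
# ...versus a caller whose question triggers a long answer.
print(round(call_cost(12, 6_000, 4_000, output_chars=18_000), 4), "EUR")  # ~0.36
```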

    In the end, these pricing schemes are this convoluted precisely so that providers can charge as much as possible (while appearing to charge the minimum) and still try to cover costs. Even so, the cost remains very high, and there will come a time when companies working with AI truly need to cover all of it and start raising prices on everything related to GPT, image generation, and so on. Then we will see how some companies that grew thanks to those low prices, and built their business plans around them, are left without users, because nobody will be willing to pay what AI really costs.


    When Will the Bubble Burst?

    I don’t think anyone knows—basically, when the funds that lent the money to invest in AI infrastructure start demanding returns. Right now, we’re in a stage that in the Silicon Valley tech world we call: «creating the need.» Well, who considers ChatGPT a necessary tool? Who wouldn’t miss that tool? What designer or photographer doesn’t use Adobe’s AI to improve or modify their photographs? Today, I believe that the stage of «creating the need» has already been completed, so I suppose the cost increase will be soon… possibly in 2025.

    What I am clear about is that Elon Musk has just built, in less than 120 days, a cluster of more than 100,000 NVIDIA H100 cards (at about €30,000 per card, roughly €3 billion in GPUs alone) for his own AI company. Now I ask: what return-on-investment timeframe is this company counting on? Who do you think is going to pay that cost? And most importantly, what benefit will we users get in exchange for paying what they need us to pay to cover their costs and earn the profit they expect?

    This is why I believe that AI is currently a bubble, and within one or two years, we’ll see who remains standing and at what cost…