Hype is a loaded word, in both meaning and energy. It moves masses in today’s consumerism-oriented world, in every category imaginable, technology included, for consumers and professionals alike.
Hype is not inherently bad, but past a certain threshold it becomes nauseating and blinding, creating mindless followers and fanboys of the worst kind. When hype reaches these levels, logical discourse becomes impossible to maintain, or even to start. Worse, the effect is addictive, nudging people to chase the latest hype without even understanding the thing they’re running after.
As I have written before, I’m a big believer in craftsmanship and mastery, and I prefer the benefits they bring over changing tools too often.
In recent times, I have seen two examples of this over-the-top hype: Rust (the programming language) and generative AI models (the GPT family, DALL-E and friends, the usual suspects). The hype is at self-sustaining levels because of what they bring to the table, but it is also being pumped up to increase their influence over their respective ecosystems, disregarding the damage it does to the circles they touch. I will try to analyze them one by one.
Rust is an interesting programming language. In short, it prevents you from doing things that are potentially dangerous from a memory-safety perspective. The borrow checker is strict, halting compilation the moment something violates the safety model, even at the slightest level. Rust does provide an escape hatch called unsafe, which relaxes the rules to allow more operations, but the borrow checker itself has no switch: it is always on. Nevertheless, people keep trying to bypass it for fun and profit.
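To make the escape hatch concrete, here is a minimal, illustrative sketch (not from the original post, and the variable names are mine): creating a raw pointer is allowed in safe Rust, but dereferencing it requires an unsafe block, while the borrow checker stays active everywhere else.

```rust
fn main() {
    let x: i32 = 42;
    let p: *const i32 = &x; // creating a raw pointer is perfectly safe

    // Dereferencing it is not: the compiler cannot prove the pointer
    // is valid, so we must opt in with an `unsafe` block.
    let y = unsafe { *p };
    println!("{y}"); // prints 42
}
```

Note that unsafe does not turn checks off globally; it only marks the small region where the programmer, rather than the compiler, vouches for memory safety.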
The impact of Rust and its borrow checker is huge. Data races are ruled out at compile time, and your code is guaranteed to be memory safe; leaks and deadlocks remain technically possible, but the discipline the compiler imposes makes them rarer in practice. In essence, the borrow checker forces you to be mindful of how your data moves and to design your software around it, making it snug and secure. It increases development time somewhat, and makes you reinvent a couple of wheels in the process, since some of your existing wheels don’t fit Rust as-is. That’s OK. Neat, even.
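As a minimal sketch of the rule being enforced (the snippet and its identifiers are illustrative, not from this post): a value may have many shared, read-only borrows or a single exclusive, mutable borrow, but never both alive at the same time.

```rust
fn main() {
    let mut scores = vec![10, 20, 30];

    // Any number of shared (read-only) borrows may coexist.
    let first = &scores[0];
    let last = &scores[scores.len() - 1];
    println!("first = {first}, last = {last}");

    // Once the shared borrows above are no longer used, an exclusive
    // (mutable) borrow is allowed again.
    scores.push(40);

    // Uncommenting the next line would reject the whole program:
    // `first` would then be a shared borrow still alive across the
    // mutable borrow taken by `push` above.
    // println!("{first}");

    println!("{:?}", scores); // prints [10, 20, 30, 40]
}
```

This is exactly the "be mindful about how your data moves" pressure described above: the compiler makes the lifetime of every borrow an explicit design concern.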
Problem is, the excitement around Rust makes people blind. While the mapping is not one to one, the problems Rust attacks have been worked on for a long time and solved with different approaches and trade-offs. With the hype, all this effort, and the other mechanisms and methods, become invisible to these people. Moreover, pointing out the existence of other ways is dismissed without even understanding how they work or what they address. Discussions become scoffing matches, or worse, flame wars, leaving both ends bitter.
In my opinion, this drives people who want to understand Rust, but are veterans of other programming languages, out of the Rust ecosystem, because being able to objectively compare things and understand what they bring to the table is an essential part of the craft. Every software project starts with selecting the right tools for the job. Otherwise, the project becomes harder, or outright impossible, depending on how far off you are.
Another problem with this hype is the “Rewrite it in Rust” movement, but that is more nuanced and deserves its own, separate blog post in the future.
A similar, if not bigger, hype cycle is happening in artificial intelligence right now. With the advances in hardware and computing capabilities, a couple of high-impact generative models (GPT, DALL-E, Stable Diffusion) were developed and made accessible to the public. Then everything spread like wildfire.
This was in the making for a long time. AI, as a discipline, was not sleeping. Everybody had been working hard for decades, but the computing power was not there; with that power at hand, the barriers were removed, and here we are.
However, the result is not a utopia, but something much more complex and nuanced, which needs to be understood and navigated. The developers of these models don’t want you to see these parts of the story; instead, they point you to the models’ capabilities. Summarizing web pages, analyzing reports, generating code on a whim, looking at a web page and duplicating it... The list goes on and on.
When one tries to lift the shiny lid, what lies beneath is a concoction of questionable practices: scraping the web regardless of how the content is licensed (ranging from copyleft to all rights reserved, and everything in between), regardless of the consent of the people who produced it, and performing various mental exercises to fit everything under fair use.
When it comes to image models, training or fine-tuning models on known artists’ styles brings another set of problems to the table: the thing these artists spent years refining is grabbed out of their hands, and their livelihoods are threatened. Moreover, this “progress” is even applauded by some, making things harder for the affected artists, both in the community and mentally.
Stripping a human being of their authenticity and telling them they are worthless is derogatory and traumatizing. Considering the time and effort an artist spends perfecting their style and technique, this is one of the worst traumas to survive, and one that is hard to recover from. Add in the passion required to endure that torturous path to mastery, and the outcome is pretty clear.
All in all, training generative models requires immense amounts of material, and defending the claim that this material falls under fair use takes tons of mental gymnastics. For example, AI researchers like to argue that their models learn like humans, to claim that they are “reading” the material like a person would. Yet the same researchers state that the model is not like a human and lacks consciousness and other human traits, hence they cannot guarantee the correctness or honesty of its outputs. To top it off, when pushed hard or cleverly enough, many models will emit their training data verbatim, creating big problems around confidentiality, privacy, consent, and much more.
Similarly, the energy consumption of these models, both in training and in inference, is another problem that deserves its own blog post, because it is again nuanced and needs room for discussion.
Point out these problems, and you will again be scoffed at, called “anti-progress”, “close-minded”, and a “coward”. Some people claim that “AI companies are doing something amazing, hence they need no permission” to do what they are doing, and that the elimination of jobs and the commoditization of art and other hard-won skills are equalizing and democratizing.
Stopping people from creating things, by traumatizing them, alienating them, and pressuring them not to share what they have made, is akin to poisoning the well you drink from. But nobody is listening, because of the hype.
There are other notable events as well, like OpenAI’s recent change of terms to allow military use, and Eric Schmidt’s company building AI-powered drones for military applications. I leave these as an exercise for the reader, since this post is long enough in its current form.
Watching the backstage of AI research, engaging in discussions, and reading about it is like seeing the sausage factory. This time, it is just far more unpleasant.
For a list of things discussed about AI, please see here.
Be aware of the hype, and be mindful of what you say and do. Like everything, an overdose of it is deadly, both for you and for the people around you.
Until next time,