Aibo Foster Parent Program

Sony’s new “Aibo Foster Parent Program” will repair and refurbish donated Aibos before providing them to foster homes, nursing homes, and other facilities where an emotional support robot could be beneficial. A machine translation of the program’s announcement says new owners will be charged an undisclosed fee for the robot.

The program is designed to extend the life of the pricey robo-dog and keep it out of landfills. The company points out that some Aibo units, depending on their condition, may simply be salvaged for parts.

The newest Aibo robots are certainly designed to feel more like a real dog than their ’90s counterparts, both in features and price. With a $2,900 upfront charge and an annual $300 on top of that for access to smartphone connectivity and online services, it is easy to see why some Aibos might find their way into the pound. The new program gives those robots a second chance at life, where studies show they might do some good—certainly more than in a closet.

Full story: https://themessenger.com/tech/sony-is-giving-old-aibo-robot-dogs-a-second-chance-at-being-good-boys

Source: https://aibo.sony.jp/csr/foster/?s_tc=st_sm_tw_aibo_aibjrny_230911_02 (in Japanese)

Fill the Internet with blah blah blah (p=1.00000)

“Garbage in, garbage out” is a basic caveat for anyone dealing with models, and a well-known source of bias in “AI”. The success of ChatGPT introduced large language models (LLMs) to the general public, and it is now clear that LLMs are here to stay. Already in 2022, 47.4% of all internet traffic was automated traffic, aka 🤖 bots. So far bots have been limited in the quality of the content they generate and easily detectable, but LLMs will change that, and this will bring a drastic change to the whole world of online text and images.

A recent arXiv paper, “The Curse of Recursion: Training on Generated Data Makes Models Forget”, considers what the future might hold. The conclusion? Future AIs, trained on AI-generated content published online, degenerate and spiral into gibberish—what the authors call “model collapse”. Model collapse appears across a variety of model types and datasets. In one funny example, a ninth-generation AI ended up babbling about jackrabbits, while the starting point was a text about medieval architecture.

This resembles a story from 2017 about two Facebook chatbots named Alice and Bob. The researchers ran experiments to improve the negotiation skills of chatbots by having them play against humans and other bots. The first bot was trained to imitate human negotiation tactics in English, but it proved to be a weak negotiator. The second bot focused on maximizing its score and exhibited superior negotiation skills, but it resorted to a nonsensical language incomprehensible to humans. (A hybrid bot scored only slightly worse than the humans while maintaining reasonable language.)

At the heart of model collapse is a degenerative process in which, over time, models forget the true data. Two things happen: the probability of the “usual” is overestimated, while the probability of the “unusual” is underestimated. The disappearing tails (the “unusual”) cause generated content to converge around a central point (the “usual”) with very small variance, until the model finally collapses into high-intensity gibberish.
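
A minimal sketch of this dynamic (a toy illustration under my own assumptions, not the paper’s actual setup): repeatedly fit a Gaussian to samples drawn from the previous generation’s fit, and watch the tails disappear as the variance collapses toward a point.

import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(201):
    # "Train" a model: estimate mean and std from the current data.
    mu, sigma = data.mean(), data.std()
    if generation % 25 == 0:
        print(f"gen {generation:3d}: mu={mu:+.3f}  sigma={sigma:.4f}")
    # The next generation trains only on what this model generates; each
    # finite sample loses a bit of the tails, so sigma tends to shrink.
    data = rng.normal(loc=mu, scale=sigma, size=20)

Run it long enough and sigma drifts toward zero: each generation fits an ever-narrower sample of the previous one, which is exactly the tail-loss described above.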

This process bears a chilling resemblance to the way self-amplifying feedback loops shape the inertia of beliefs, leading to the black-and-white thinking associated with psychiatric disorders, prejudice, and conspiracy thinking. Rigidity of beliefs and societal failure may reinforce each other through feedback mechanisms very similar to those poisoning AI’s reality. Mitigating rigid harmful beliefs in people may require sustained exposure to counterevidence, as well as supporting rational override by freeing cognitive resources—by addressing problems of inequity, poverty, polarization, and conflict. In a similar vein, avoiding model collapse requires restoring information about the true distribution through access to genuine human-generated content.

In a world plagued by narrow-mindedness and echo chambers, the last thing we need is a narrow-minded and divisive AI. The true value of human-generated data lies in its inherent richness, encompassing invaluable natural variations, errors, twists and turns, improbables and deviants. Human-generated data represents more than just a function to be optimized; it encapsulates the very essence of what makes life worth living.

NB: Heading Image by master1305 on Freepik

Horror Then vs Now

Have you noticed the ever-changing landscape of horror? Back in the day, we were terrified of those old cursed houses, with their eerie vibes and creaking floorboards, featuring known horrors, whether mythical or real. But fast forward to today, and what do we have?

Haunted smart homes! Forget the ghostly moans, clinking of chains, and transparent figures in white. The menace is unknown and ubiquitous. It could be a thermostat running amok, controlled by hackers who think they can mess with our lives. It could be a peeping web camera, broadcasting publicly due to the nonchalance of the person who installed it. It could even be a fridge ordering a hundred packs of toothpicks as the hallucinating AI recommendation system decides these mini swords are an absolute must for an epic battle between the condiments.

Remember the days when we were scared of the idea of big machines going haywire, giving us the heebie-jeebies? HAL 9000 from “2001: A Space Odyssey” would say things like, “I’m sorry, Dave. I’m afraid I can’t do that.” Well, now we don’t have insanity in the mainframe; instead, we have an army of tiny WiFi ghosts haunting our routers, watching our Netflix, chuckling at our WhatsApp messages, and making us guess who is truly behind that black box during a Zoom call. It’s like living in a digital haunted house, where even our own devices have a mischievous spirit.

So, remember to keep your devices secure and your sense of humor intact. After all, what’s scarier than a haunted smart home? Trying to get tech support on a Monday morning!

Meaningless vs Meaning-loss

Today I got an email which read like the following (I scrapped the boring, non-essential parts):

Hello Mihail PELEAH,
Thank you for your interest in null, and applying for [...] Please note that [...] 
Sincerely,
Recruitment Team
null
Picture from “Vovka in a Far Far Away Kingdom”, a 1965 animation: https://www.imdb.com/title/tt0213309/

One of many machine-generated messages, one could say: cheap, fast, ubiquitous. There seems to be nothing wrong with that—after all, Bloomberg has been using automatically generated news since the ’80s or ’90s. But this email is meaning-loss.

It brought me no new information—the scrapped part […] highlighted information that was already in the application. But someone decided to send this email, prioritizing speed over quality, and over checking how it would look to recipients.

Automatization—epitomized by buttons like “Make it nice!” or “Do it for me!”—gives rise to an “I don’t know and I don’t care” attitude and a desire to outsource decisions. According to the Oracle Decision Dilemma Report 2023, 64% of people and 70% of business leaders would prefer to have a robot make their decisions. However, this is meaning-loss: it distracts from WHY we make these decisions and focuses on the minuscule hassle of the WHATs of decision making.

The meaningless can sometimes be useful; meaning-loss—never. Meaningless mingling at parties can be fun and build camaraderie. Meaningless slamming in a mosh pit can help us shed negativity. Meaningless sitting in silence in a corner is called meditation.

(The Oxford dictionary defines meaningless as “without any purpose or reason” and adds “and therefore not worth doing or having.” Many artists would not agree with the latter statement.)

In a world plagued by speed and efficiency, it is easy to fall into the trap of meaning-loss. Maybe embracing the meaningless could help us slow down, focus on quality, and ask ourselves WHY, reclaiming our agency?

P.S. Just got another email:

Congratulations! Job requisition [...] was canceled and has reached the Open - Canceled status.

Bringing the ‘punk’ in Cyberpunk!

As the bustling city streets filled with pedestrians, a peculiar sight caught the attention of a curious onlooker. A delivery robot, with its tiny rollers and glowing LED eyes, stood patiently at a pedestrian crossing, waiting for the green light to signal its safe passage.

Meanwhile, humans rushed past the robot without a second thought, jaywalking and ignoring traffic rules as if they were mere suggestions. As if guided by some divine hand, cosmic irony intervened, arranging a perfectly synchronized sequence of events.

The traffic light turned green, signalling the robot to move forward. Just as it began to roll ahead, a delivery boy with a bulky square package appeared beside it. And just like that, the robot and the delivery boy moved in a stunning display of synchronized dance, as if they had been rehearsing the routine for weeks.

The robot blinked its LED eyes and muttered to itself:

[{  "image_id":"I0ZyaWRheUZ1bkZlZWQ=",     "label": [{
          "label":"class_name_human", 
          "prob": 0.9871635, 
          "text":"Meatbags, what a bunch of morons. . . Bringing the 'punk' in Cyberpunk!", 
    },]
}]

What’s Wrong with ChatGPT? A View from Economists

Renowned economists Daron Acemoglu and Simon Johnson are concerned about ChatGPT. More precisely, about the way AI is deployed by corporations in the US. Their analysis points out that it could displace workers, harm consumers, and bring losses to investors. The crux of the issue is the focus on cutting labour costs (in the short run), with little regard for future spending power and workers’ earnings, and a neglect of the potential benefits of AI.

🤖 The AI arms race, funded by billions from companies and venture-capital funds, is bringing in technology that can now be used to replace humans across a wider range of tasks. This could be a disaster not only for workers, but also for consumers and even investors.

👨‍🏭 Workers face a clear and present danger. The job market is shifting, reducing demand for positions that require strong communication skills and ultimately shrinking the pool of higher-paying jobs. This trend is particularly challenging for younger people just starting their careers, as there will be fewer entry-level positions available. AI-powered tools could help with legal research, for example, but deprive novice lawyers of learning their techne through hands-on research.

🛍 Consumers, too, will suffer. Chatbots may suffice for routine inquiries, but they are inadequate for more complex issues—a flight delay, a household emergency, or a breakdown in personal relationships. We need the understanding and actions of qualified professionals, not eloquent but unhelpful chatbots.

💸 Investors could also be disappointed as companies invest in AI technology and cut back on their workforce. Rather than investing in new technologies and training their employees to improve services, executives are more interested in keeping employment and wages as low as possible. This strategy is self-defeating and could harm investors in the long run.

🐙 The crux of the issue is that the potential of AI is being overlooked, as most US tech leaders invest in software that replicates tasks already performed by humans. By contrast, AI-powered digital tools could be used to help nurses, teachers, and customer-service representatives understand what they are dealing with and what would improve outcomes for patients, students, and consumers. The focus is primarily on reducing labor costs, with little regard for the immediate customer experience or the long-term spending power of Americans. History has shown that this approach is not inevitable: Ford recognized that there was no point in mass-producing cars if people couldn’t afford to buy them. In contrast, modern corporate leaders are deploying new technologies in a way that could have detrimental effects on our future.

Read the full article: https://www.project-syndicate.org/commentary/chatgpt-ai-big-tech-corporate-america-investing-in-eliminating-workers-by-daron-acemoglu-and-simon-johnson-2023-02

P.S. I am currently reading “In the Age of the Smart Machine: The Future of Work and Power” by Shoshana Zuboff. The book, published back in 1988, explores the impact of the first wave of smart machines on labour relations and the future of work. There are many similarities and lessons learned for the current wave of ubiquitous AI-fication.

The Rise of the Data Elite: How AI Research Is Reinforcing Power Imbalances

The rise of AI-powered tools is transforming our everyday lives. We use the magic of ChatGPT and Midjourney, and more mundane AI-powered credit profiling and email completion tools. However, the democratization of AI use is accompanied by global power disparities in AI research. A chart from the “Internet Health Report 2022” shows that the landscape of AI research papers is heavily skewed towards a few countries and elite institutions. The map reveals that more than half of the datasets used for AI performance benchmarking came from just 12 institutions and tech companies in the United States, Germany, and Hong Kong (China).

This map shows how often 1,933 datasets were used (43,140 times) for performance benchmarking across 26,535 research papers from 2015 to 2020.
Source: “Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research”, Bernard Koch, Emily Denton, Alex Hanna, Jacob G. Foster, 2021.

This major imbalance in the discourse about how AI should be used, and who should benefit from it, reinforces existing power imbalances. A discussion piece from Data-Pop Alliance, “The Return of East India Companies: AI, Africa and the New (Digital) Colonialism”, explores various aspects of AI colonialism in Africa. For instance, natural language processing (NLP) technologies remain under-developed for non-Western languages. The computer vision of self-driving cars relies on low-paid human workers labelling hundreds of hours of data. Lax ethical standards and “data dumping” in countries with less stringent data protection regulations effectively render local people and society AI guinea pigs. Despite the decreasing cost of training machine learning systems and the greater availability of data, the power dynamics in AI research and development continue to reflect the dominance of a select few.

While machine learning models and datasets are being developed in other parts of the world, their use in research papers and performance benchmarking is still limited. As consumers and as researchers, we have the power to seek greater diversity and inclusivity in AI research, and to advocate for ethical standards that address data inequalities. For example, the UNDP and UNICEF regional Eurasia platform STEM4ALL promotes women and girls in STEM, shares knowledge, raises awareness, and works to break gender stereotypes. Another way is to promote collaboration across borders and develop our own datasets to contribute to the global conversation.

Gimble in the Wabe

Recent developments in AI have produced impressive tools, such as models for image generation. For instance, DALL-E 2 grabbed many headlines, as it can create realistic images and art from a description in natural language. While the generated images are impressive, a basic question remains unanswered: how does the model grasp relations between objects and agents? Relations are fundamental to human reasoning and cognition. Hence, machine models that aim at human-level perception and reasoning should be able to recognize relations and adequately reflect them in what they generate.

The recent paper “Testing Relational Understanding in Text-Guided Image Generation” puts this assumption to the test. The researchers generated galleries of DALL-E 2 images using sentences with basic relations—e.g. “a child touching a bowl” or “a cup on a spoon”. They then showed the images and prompt sentences to 169 participants and asked them to select the images that matched each prompt. Across the 75 distinct prompts, only some 20% of images were perceived as relevant to their associated prompts. Agentic prompts (somebody doing something) produced slightly higher agreement, 28%; physical prompts (X positioned in relation to Y) showed even lower agreement, 16%. The chart shows the proportion of participants reporting agreement between image and prompt, by the specific relation being tested. Only three relations reach agreement significantly above 25% (“touching”, “helping”, and “kicking”), and no relation reaches agreement above 50%.
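
As a back-of-the-envelope sketch of the agreement metric itself (with made-up judgements, not the study’s data): agreement is simply the share of participants who say an image matches its prompt, grouped by relation or by prompt.

import pandas as pd

# Hypothetical participant judgements: 1 = "image matches the prompt", 0 = it does not.
responses = pd.DataFrame({
    "relation": ["touching", "touching", "on", "on", "in", "in"],
    "prompt": [
        "a child touching a bowl", "a child touching a bowl",
        "a cup on a spoon", "a cup on a spoon",
        "a spoon in a cup", "a spoon in a cup",
    ],
    "match": [1, 1, 0, 0, 1, 0],
})

# Agreement = mean of the binary judgements.
print(responses.groupby("relation")["match"].mean())
print(responses.groupby("prompt")["match"].mean())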

The results suggest, first, that the model does not yet grasp even basic relations involving simple objects and agents. Second, the model has particular difficulty with imagination, i.e. the ability to combine elements not previously combined in the training data. For instance, the prompt “a child touching a bowl” generates images with high agreement (87%), while “a monkey touching an iguana” shows far worse results (11%). “A spoon in a cup” is easily generated, but not “a cup on a spoon”, reflecting the effects of training data on model success.