Mon Oct 30

  • UN Launches AI Advisory Board

  • LAION's 'Open Empathic': Emotion-Detecting AI's Promise and Peril

  • OpenAI's 'Preparedness' Team

  • Nightshade: Artists' New 'Paw-some' Weapon Against AI Art Heists

UN Launches AI Advisory Board: The United Nations is diving headfirst into the world of artificial intelligence. It has just unveiled a 39-member AI advisory board of experts drawn from government, academia, and industry heavyweights like Alphabet/Google and Microsoft. Their mission? To offer insights and recommendations on the international governance of AI. The board will also serve as a liaison for the UN's various AI initiatives. With talks underway, the UN aims to consolidate these recommendations by summer 2024, to coincide with its "Summit of the Future" event. Its approach leans toward a constructive, positive stance on AI, emphasizing the technology's potential to advance the Sustainable Development Goals and strengthen international cooperation. But with AI's growing power comes responsibility: concerns about misuse, especially in spreading misinformation, loom large, underscoring the need for comprehensive governance.

LAION's 'Open Empathic': Emotion-Detecting AI's Promise and Peril: LAION, founded in 2021 by German teacher Christoph Schuhmann and a group of AI enthusiasts, is set on democratizing AI research, starting with training data. Its latest project, Open Empathic, runs a platform where volunteers annotate YouTube clips to capture a person's emotions, age, gender, accent, and more. The goal is to train AI models that genuinely understand a range of languages and cultures. While LAION hopes to accumulate up to 1 million samples by next year, the challenge is keeping the data free of bias. Past LAION datasets have had issues, including inappropriate images and embedded biases. The potential misuse of emotion-detecting AI has long been a concern, with some calling for outright bans. Still, LAION remains committed to an open-source approach, believing community oversight will ensure transparency and safety. Yet, as with all potent tools, the balance between benefit and misuse remains to be seen.

OpenAI's 'Preparedness' Team: OpenAI is ramping up its defenses against potential AI threats with the introduction of a new team named "Preparedness." Led by Aleksander Madry, previously of MIT and now donning the title "head of Preparedness," the team's mission is to monitor, predict, and mitigate dangers posed by future AI systems. These range from AI's persuasive powers in scams to its capability to produce harmful code. Surprisingly, OpenAI's list of AI-related concerns extends to "chemical, biological, radiological, and nuclear" threats. While OpenAI CEO Sam Altman has often expressed apprehensions about AI's potential risks, this move showcases the company's commitment to exploring even sci-fi-esque threats. In tandem with the team's launch, OpenAI is encouraging the public to suggest potential AI risks, incentivizing the best ideas with a cash prize and a job offer. The ultimate goal? Ensuring the safe evolution and deployment of increasingly advanced AI models.

Nightshade: Artists' New 'Paw-some' Weapon Against AI Art Heists: Ever felt like pulling a fast one on AI? University of Chicago researchers have debuted Nightshade, an artsy tool that makes AI see cats where it should see dogs. While humans see the usual image, AI models get bamboozled by subtle pixel tweaks. The aim? Give artists a way to protect their masterpieces from being used as unauthorized AI training data. Imagine an AI model confidently presenting a feline when you asked for a canine. Quite the "ruff" day for AI, eh? Building on their prior AI-confuser, Glaze, Professor Ben Zhao's team hopes to hand power back to artists. For AI developers, though, it's a game of "spot the poisoned pixel" - a tricky task when the changes are imperceptible to the human eye. For artists, Nightshade might just be their knight in shining armor against AI art pilfering. 🎨🐾
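To get a feel for just how small these "poisoned pixel" changes can be, here is a toy NumPy sketch. Note this is NOT Nightshade's actual algorithm - the real tool computes optimized perturbations targeting a model's feature extractor - this hypothetical `perturb_image` helper merely shows that shifting every pixel by a couple of intensity levels stays invisible to humans while still altering every value a model ingests:

```python
import numpy as np

def perturb_image(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a tiny bounded pseudo-random perturbation to an 8-bit RGB image.

    Nightshade's real perturbations are optimized against specific models;
    this random-noise stand-in only illustrates the imperceptible magnitude.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    poisoned = np.clip(image.astype(np.float64) + noise, 0, 255)
    return poisoned.astype(np.uint8)

# A toy 64x64 gray "artwork": after perturbation, no pixel moves by more
# than 2 intensity levels out of 255 - far below what a human can notice.
art = np.full((64, 64, 3), 128, dtype=np.uint8)
poisoned = perturb_image(art)
max_shift = int(np.abs(poisoned.astype(int) - art.astype(int)).max())
print(max_shift)
```

The hard part for AI developers is exactly this asymmetry: the defender only needs changes within a tiny epsilon, while the detector must flag them across millions of scraped images.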

Refer us to your friends and get some cool and exclusive swag!