AI Black Box Opened

Anthropic’s AI exploration 👀, OpenAI eyes its own browser

In Today’s Edition:

  • Anthropic’s exploration into how AI works 👀

  • OpenAI shows its interest in owning a browser

  • Humans’ politeness costs OpenAI millions of dollars 😂

  • Student suspended from an Ivy League university for unethical use of AI đŸ˜±

Anthropic’s AI Exploration

AI is a well-known black box: nobody knows exactly what is going on inside it. This opacity is what makes it unpredictable and hard to trust. Heavy research is underway on explainable AI, a class of AI models that can explain the reasoning behind the answers they generate.

While this research continues, Anthropic has achieved a breakthrough: it has figured out how to look at what is happening inside existing models by tracking millions of internal features. It has published a comprehensive paper on the findings, which makes for very interesting reading. Here are some key insights:

  1. AI models plan the words they will use many words ahead, even though they are trained only to predict the next word

  2. Models tend to agree with the user, even when the user’s information is incorrect

  3. During training, models arrive at a shared space of features: the same word in multiple languages activates similar vectors/nodes
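Insight 3 can be pictured with a small toy sketch. This is not Anthropic’s method, and the three-dimensional vectors below are invented purely for illustration: if a model learns a shared feature space, translations of the same word should have more similar internal activation vectors than unrelated words, which we can check with cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical internal activations for the concept "small" (made-up numbers):
small_en = [0.9, 0.1, 0.3]    # "small" (English)
small_fr = [0.85, 0.15, 0.35] # "petit" (French)
big_en = [-0.8, 0.2, 0.9]     # "big" (English), an unrelated concept

# Translations of the same concept land close together in the shared space;
# unrelated concepts do not.
print(cosine_similarity(small_en, small_fr))  # close to 1.0
print(cosine_similarity(small_en, big_en))    # much lower (negative here)
```

Real models work with activation vectors of thousands of dimensions, but the comparison is the same idea.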

To read the full report, check out Anthropic’s paper: Mapping the Mind of a Large Language Model

Top Players Show Interest in Chrome

Google is facing an antitrust trial over Chrome (the #1 browser) for illegally maintaining a monopoly in the search segment and curbing competition. The judge even suggested that Google sell the browser and share its data to enable fair competition in the segment. Google has flagged the proposal as unprecedented, remarking that even Google itself could not recreate Chrome.

Amid this chaos, AI giants like Perplexity and OpenAI have openly shared their interest in owning a browser (and, if possible, buying Google Chrome). A browser generates huge amounts of user data, which is exactly what AI companies need.

  • Is it safe to put these tools in their hands?

  • Will the companies abide by the safe usage policy?

Human Politeness - A Costly Habit for OpenAI

In an interesting and funny turn of events, humans habitually reply to GPT models with messages like “Thank you”, “You are a helpful model”, and “Thanks for the help”. But to generate responses to these messages, the model needs to burn a lot of energy on costly GPUs. When a user asked about this on X (formerly Twitter), Sam Altman replied that the computational power spent responding to these messages has cost the company millions of dollars.

Student suspended for unethical use of AI

In a recent incident, the top five tech giants rolled back offers to a few candidates after learning they had used AI to cheat. A student from Columbia University claimed that he created software designed to cheat in coding interviews.

The tool, which remained undetected by popular conferencing and proctoring apps, uses speech-to-text and generates code to assist the candidate during interviews. Companies filed a complaint with the university, which initiated disciplinary action. The student defended himself by saying that his tool highlights loopholes in current technology and recruitment methods. The university, however, suspended him.

Given such events, companies are increasingly considering a return to in-person interviews. It is up to us to use AI responsibly and ethically.

Find us on social media:

If you have any queries or suggestions, write to us at [email protected].

Consider sharing this newsletter and recommending it to your friends!

Thanks and regards,
Pruthvi Batta
AI ML Universe