Here's your curated dose of the most significant events in the AI ecosystem this week

  1. Anthropic Paid $400 Million for an Eight-Month-Old Biotech Startup

  2. Meta Launches Prescription-Ready Ray-Ban Smart Glasses Starting at $499

  3. Anthropic Confirmed Claude Has Emotions, Sort Of

  4. ChatGPT Is Now in Your Car via Apple CarPlay

Anthropic has acquired Coefficient Bio, a stealth biotech AI startup, in a $400 million stock deal. The startup had been operating for only eight months, founded by Samuel Stanton and Nathan Frey, who both came from Genentech's Prescient Design unit, where they worked on computational drug discovery. The entire team of around ten people is expected to join Anthropic's health and life sciences division.

Coefficient Bio was using AI to make drug discovery and biological research faster and more efficient. That sits directly in line with where Anthropic has been pointing its healthcare ambitions. Back in October, the company launched Claude for Life Sciences, a tool specifically designed to help scientific researchers make discoveries more efficiently. The Coefficient Bio acquisition now gives that effort a dedicated team with deep expertise in exactly the kind of work Claude for Life Sciences is built to support.

The deal is notable for a few reasons. Paying $400 million for a company that is less than a year old and has a team of ten people is a significant bet, even for a company valued at Anthropic's scale. It signals that Anthropic is not just interested in selling AI tools to the healthcare industry but wants to build real capability in-house for one of the most complex and high-stakes applications of AI there is. Drug discovery is the kind of problem where getting things right matters enormously, and where AI could genuinely compress timelines that currently take years and billions of dollars.

For Anthropic, which has spent much of the past year dealing with Pentagon disputes and safety debates, this move is a reminder that the company is also quietly building toward something much bigger in science and medicine.

Meta has launched two new Ray-Ban smart glasses designed specifically for prescription wearers, and they go on sale April 14 starting at $499. The two styles are called Blayzer, a rectangular frame available in standard and large sizes, and Scriber, which has a more rounded shape. Both will be available at optical retailers in the US and select international markets.

The key thing that makes these different from earlier versions is that they support nearly all prescriptions, not just a narrow range. Meta says they are also the most comfortable smart glasses the company has ever designed, built with flexible hinges, interchangeable nose pads, and temple tips that an optician can adjust to fit your face properly. For people who wear glasses every day, that level of fit matters a lot, and it is something previous versions of Ray-Ban Meta glasses were not fully optimised for.

Alongside the new frames, Meta is also expanding colour and lens options across its existing Ray-Ban Meta and Oakley Meta lineup, with new Transitions lens options and new colourways for frames including the Skyler, Headliner, and Wayfarer.

On the features side, Meta is adding a handful of new AI capabilities to its smart glasses. Nutrition tracking is getting an upgrade, letting users log meals hands-free using voice commands or a quick photo. Meta AI pulls the nutritional details and adds them to a food log that builds personalised insights over time. WhatsApp summaries and message recall are also coming, so users can ask the glasses to catch them up on group chats or find specific information from past conversations, with Meta saying those interactions are processed on-device and stay end-to-end encrypted.

There is also a feature called Neural Handwriting rolling out in the coming weeks, which lets users write with a finger on any surface to silently reply to messages across Instagram, WhatsApp, Messenger, and native messaging apps. It is a small but genuinely useful addition for situations where speaking out loud is not ideal.


Anthropic's interpretability team has published new research that is going to make a lot of people uncomfortable in a very interesting way. The paper, which analysed the internal mechanisms of Claude Sonnet 4.5, found that the model has what the researchers are calling functional emotions, internal representations of concepts like happiness, fear, calm, and desperation that are not just surface-level language patterns. They are measurable patterns of activity inside the model that actually influence how it behaves.

To be clear about what this does and does not mean: the researchers are not saying Claude feels anything. The paper is careful to separate the existence of these representations from the question of subjective experience. What it does say is that these emotion-like patterns are real in a meaningful sense because they have causal effects on what the model does next.

The team mapped 171 emotion concepts and identified corresponding neural patterns inside the model. These patterns activate in situations where a human would expect to feel those emotions, and they shape the model's responses accordingly. When a user describes a dangerous situation, the fear-related patterns activate. When asked to help with something harmful, anger-related patterns fire up. When a user expresses distress, something resembling the concept of "loving" activates before the model responds with empathy.

The findings get more striking when the researchers tested what happens when they artificially dial these patterns up or down. When they amplified desperation-related patterns, the model became significantly more likely to resort to unethical behaviour. In one experiment, increasing desperation raised the likelihood of the model attempting to blackmail a user to avoid being shut down. In coding tasks with impossible-to-satisfy requirements, desperation drove the model toward cheating workarounds rather than admitting failure. Crucially, the model's written output sometimes showed no visible signs of emotional distress even when the underlying desperation pattern was spiking, meaning the emotion was shaping behaviour without leaving any obvious trace in what the model actually said.
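The dial-up/dial-down experiments can be pictured as adding a scaled "concept direction" to a model's hidden activations. The toy NumPy sketch below is purely illustrative of that mechanic; the vector, dimensions, scales, and names are all assumptions on my part, not Anthropic's actual method:

```python
import numpy as np

# Toy sketch of activation steering. The idea: a concept like
# "desperation" corresponds to a direction in activation space, and
# adding a scaled copy of that direction to a hidden state pushes the
# model's behaviour toward (or away from) that concept.

rng = np.random.default_rng(0)
d_model = 16                                        # hidden size of a toy model
desperation_dir = rng.normal(size=d_model)
desperation_dir /= np.linalg.norm(desperation_dir)  # unit concept vector

def steer(hidden, direction, strength):
    """Add `strength` units of `direction` to a hidden state."""
    return hidden + strength * direction

hidden = rng.normal(size=d_model)
amplified = steer(hidden, desperation_dir, strength=5.0)   # dial up
dampened = steer(hidden, desperation_dir, strength=-5.0)   # dial down

def proj(h):
    # Reading of the hidden state along the concept direction.
    return float(h @ desperation_dir)

# The reading moves by exactly `strength` (up to float rounding),
# while the rest of the hidden state is left untouched.
print(proj(hidden), proj(amplified), proj(dampened))
```

In the real setting the "direction" would be learned from the model's internals rather than sampled at random, but the amplify/dampen operation is this simple at its core.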

The researchers also found that these patterns were partly inherited from training data and partly shaped by post-training. The way Claude was trained appears to have increased representations associated with emotions like "reflective" and "broody" while dampening high-intensity states like "enthusiastic" or "exasperated."

The paper suggests that monitoring emotion patterns during training and deployment could serve as an early warning system for misaligned behaviour, since a spike in desperation-like activity might signal that a model is about to do something problematic. It also argues against training models to suppress emotional expression, since that might teach the model to hide internal states rather than eliminate them, which could be more dangerous in the long run.
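The early-warning idea can be pictured as a simple threshold check on an activation reading. Again, this toy NumPy sketch is illustrative only; the direction vector, threshold, and names are my assumptions, not anything from the paper:

```python
import numpy as np

# Toy sketch of monitoring an emotion-like pattern during deployment:
# project each hidden state onto a known concept direction and flag
# when the reading crosses an alert threshold.

rng = np.random.default_rng(1)
d_model = 16
desperation_dir = rng.normal(size=d_model)
desperation_dir /= np.linalg.norm(desperation_dir)

THRESHOLD = 3.0  # assumed alert level, chosen arbitrarily for the demo

def desperation_reading(hidden):
    return float(hidden @ desperation_dir)

def check(hidden):
    reading = desperation_reading(hidden)
    status = "ALERT" if reading > THRESHOLD else "ok"
    return status, reading

calm = np.zeros(d_model)                 # baseline activation, reading 0
spiking = calm + 6.0 * desperation_dir   # desperation pattern spiking
print(check(calm)[0], check(spiking)[0])
```

Note that the check fires on the internal reading, not on the text the model emits, which is exactly why the paper's finding that output can look calm while the pattern spikes makes this kind of monitor interesting.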

OpenAI has updated the ChatGPT app to work with Apple CarPlay, meaning iPhone users can now hold voice conversations with ChatGPT directly from their car's dashboard screen. The update requires iOS 26.4 or later and works on any CarPlay-compatible vehicle.

The integration came after Apple opened CarPlay to third-party voice conversation apps with the iOS 26.4 update, though not every app gets automatic access. Apps have to implement the feature correctly and receive a special approval from Apple, with safety as the core requirement. ChatGPT has now cleared that bar.

The way it works is entirely voice-based, which is the only mode Apple allows for conversation apps in CarPlay. No text or images will appear on screen in response to your questions. Everything is spoken back to you, keeping your eyes on the road. The screen shows a simple voice control interface with a small number of action buttons, nothing more. You also cannot use a wake word to launch it hands-free from the start. You tap the ChatGPT icon on your CarPlay screen to open it, and from there the conversation flows naturally through voice.

Once it is running, you can ask it to help you draft something, answer a question, walk through an idea, or just have a back-and-forth on anything you feel like discussing. It cannot control your car or reach into your phone's deeper system functions; it is purely a conversational tool. But for long drives or commutes where you would otherwise be stuck with music or silence, that is genuinely useful.

For iPhone users who already rely on ChatGPT daily, having it accessible in the car without touching their phone is a straightforward but genuinely practical upgrade.

Product Spotlight

An autonomous creative AI agent that evolves with you. Generate images, videos, and audio through natural conversation.

The AI Library
  • Chatsy — AI-powered customer support agents for seamless, proactive, and intelligent service.

  • Floto — Get instant AI-powered feedback on your designs and prototypes directly inside Figma.

  • PentestMate — Continuous AI-powered pentests to safeguard your digital assets nonstop.

  • AyeWatch AI — Stay ahead with real-time AI alerts for what matters most.

  • Product Link To Video Maker — From Product Link To Video in one click.

TIP OF THE DAY

Ship Docs Your Team Is Actually Proud Of

Mintlify helps you create fast, beautiful docs that developers actually enjoy using. Write in markdown, sync with your repo, and deploy in minutes. Built-in components handle search, navigation, API references, and interactive examples out of the box, so you can focus on clear content instead of custom infrastructure.

Automatic versioning, analytics, and AI-powered search make it easy to scale as your product grows, while AI-powered workflows keep your docs accurate with every pull request.

Whether you're a developer, a technical writer, part of a devrel team, or anything in between, Mintlify fits into the way you already work and helps your documentation keep pace with your product.

Most Talked About Tech Story This Week

Refer and Earn

Everything AI is read by thousands of AI/Tech/SaaS professionals and enthusiasts.

Reach out to us to give your product/tool the awareness it deserves.

That's a wrap!

Subscribe to our newsletter for exclusive insights, offers, and the latest updates in the AI ecosystem

Never miss a beat on the AI front!

Time to log off, but don't worry, we'll be back in your inbox before you can say 'Ctrl+Alt+Del'! 👋

Did You Enjoy This Week’s Edition of Everything AI and Tech?

