In case you don’t know what Anthropic is, it’s the Google- and Amazon-backed AI company behind the Claude family of AI models. We’re currently on the Claude 3.5 generation, which includes Claude 3.5 Sonnet and Claude 3.5 Haiku; a Claude 3.5 Opus has been announced but not yet released. The Sonnet model is free to use. All you have to do is create a free account, and you’ll be able to access it on Anthropic’s website.
Claude could soon perform actions on your computer
At the moment, Anthropic is testing a new feature for its Claude 3.5 Sonnet model that allows it to perform simple actions on your computer. The feature, called “computer use,” isn’t available to everyone just yet; it’s in public beta for developers using the Anthropic API.
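If you’re curious what opting in looks like, here’s a minimal sketch using Anthropic’s Python SDK, based on the beta flag and computer-use tool type documented for this release. The model ID, beta name, and tool fields reflect the October 2024 beta and may change while testing continues.

```python
# Minimal sketch: opting into the "computer use" public beta with the
# official anthropic Python SDK. The beta flag and tool type below match
# the October 2024 beta announcement; treat them as subject to change.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],  # opt-in flag for the beta
    tools=[
        {
            "type": "computer_20241022",  # built-in computer-use tool
            "name": "computer",
            "display_width_px": 1280,   # Claude needs the screen dimensions
            "display_height_px": 800,   # to compute click coordinates
        }
    ],
    messages=[{"role": "user", "content": "Open a browser and check the weather."}],
)

# stop_reason is "tool_use" whenever Claude wants to act on the screen.
print(response.stop_reason)
```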
With this feature enabled, Claude will be able to do simple tasks on your computer like reading the screen, moving the cursor, clicking on buttons, and even typing text. That’s pretty much the limit for the time being, as the company is still working on allowing it to perform more advanced tasks. So, don’t expect to tell Claude to Photoshop an image for you just yet.
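To get a feel for how basic these actions are, here’s a rough sketch of the kind of dispatcher a host program might run to carry them out. The action names follow the beta’s computer-use tool schema; pyautogui is just one possible automation backend, not something Anthropic requires.

```python
# Sketch of a host-side dispatcher for the simple actions Claude can request.
# Action names ("screenshot", "mouse_move", "left_click", "type") follow the
# computer-use tool schema; pyautogui is one possible automation backend.
import pyautogui

def perform_action(action: dict):
    kind = action["action"]
    if kind == "screenshot":
        return pyautogui.screenshot()        # Claude "reads" the screen
    elif kind == "mouse_move":
        x, y = action["coordinate"]          # pixel coordinates on the display
        pyautogui.moveTo(x, y)
    elif kind == "left_click":
        pyautogui.click()                    # click at the current cursor position
    elif kind == "type":
        pyautogui.write(action["text"])      # type literal text
    else:
        raise ValueError(f"Unsupported action: {kind}")
```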
This might sound a little worrying, but Anthropic has built in safeguards to make sure Claude doesn’t do anything it shouldn’t. For example, Claude isn’t allowed to interact with social media sites, which includes generating and making posts. It also cannot access government websites, engage with election-related content, or register domain names, all things that could land Anthropic in trouble.
How it will work
Claude won’t observe your screen the way you do. The model isn’t digesting a live video feed of what’s happening on your computer. Rather, much like the Windows Recall feature, it takes periodic snapshots of your screen and “pieces them together.”
This means some events on your computer can fall through the cracks. Anything that appears and disappears between snapshots, like a brief notification, is a short-lived event that Claude will simply miss.
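Put together, the snapshot-driven flow amounts to a simple loop: ask Claude what it wants to do, perform the action, send back a fresh screenshot, and repeat. The sketch below assumes the client setup and perform_action dispatcher from the earlier snippets; it’s an illustration of the pattern, not Anthropic’s reference implementation.

```python
# Sketch of the screenshot loop described above: Claude only ever sees the
# discrete frames we send back as tool results, never a live video feed.
# Assumes `client` and perform_action() from the earlier sketches.
import base64
import io

import pyautogui

def capture_screenshot_base64() -> str:
    """Grab the current screen and return it as a base64-encoded PNG."""
    buf = io.BytesIO()
    pyautogui.screenshot().save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()

messages = [{"role": "user", "content": "Close the welcome dialog."}]
while True:
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        betas=["computer-use-2024-10-22"],
        tools=[{"type": "computer_20241022", "name": "computer",
                "display_width_px": 1280, "display_height_px": 800}],
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        break  # Claude has finished; no further actions requested

    messages.append({"role": "assistant", "content": response.content})
    results = []
    for block in response.content:
        if block.type == "tool_use":
            perform_action(block.input)  # e.g. move, click, or type
            results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": [{"type": "image", "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": capture_screenshot_base64(),  # the next "frame"
                }}],
            })
    messages.append({"role": "user", "content": results})
```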
It’s still in testing
As with any AI tool, there’s always the chance that it could go awry and mess up. The developers gave an example of Claude randomly looking up pictures of Yellowstone National Park. While that seems innocuous, irregular actions like these could have damaging effects if there’s something sensitive on your screen.
This is why the company is actively beta-testing the feature: it wants to gather as much constructive feedback from users as possible before rolling it out to the public.
We don’t know when Anthropic plans to roll this feature out to the general public.