A month after its announcement, Google has brought Lens to its image search. The feature was previously available via a standalone app or the Google camera app, with specific enhancements for Google’s Pixel line of phones.

For general users, this means access to machine learning from the browser. Google Lens uses AI to recognize the contents of an image and surface similar results, letting users identify objects to purchase and more.

It can also identify landmarks, providing users with additional information without them having to search for it. So far, Google hasn’t mentioned Lens’ text detection capabilities, but it’s possible you’ll be able to search the text from an image.

You can target specific parts of an image by simply circling them, letting you pick out a sofa from a living room, for example. The feature is generally pretty good but can struggle with more obscure items.

More Regions Coming Soon

Like Google’s search by image feature, it also seems to prioritize color over the object itself, so you can get strange results. It’s rarely good enough to find the exact item you’re pointing at, but it can get pretty close with some objects.

US users can use the feature via the new Lens button in Google Images. Google says it will come to other countries soon, marking a huge upgrade in its visual search.

Lens is also available in Google Photos and Google Assistant, and on the Pixel 3 it can display information in real time. It shares some similarities with Microsoft’s Office Lens and Snip Insights, which provide text detection and landmark identification after an image is taken.