
DragGAN: New AI Tool Outclasses Photoshop Warp with Revolutionary Features

DragGAN's point-based editing technique lets users make precise modifications to specific image areas without affecting the rest of the image.


The world of Artificial Intelligence (AI) continues to astound as researchers have developed a new image manipulation system known as DragGAN, promising to revolutionize the field of image editing.

A recent scientific article titled “Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold” presents a new approach to image editing using generative priors and point-based editing. The researchers aimed to improve the precision and flexibility of deep learning models for image manipulation. Their method uses a Generative Adversarial Network (GAN) to synthesize a new image based on the specific areas the user has selected and modified. This approach outperforms existing methods in both accuracy and efficiency.

Point-Based Editing Technique for Precise Image Modifications

This point-based editing technique allows users to make precise modifications to specific image areas without affecting the entire image. The improvement paves the way for promising applications in fields like fashion, advertising, and entertainment, where custom designs and targeted modifications are desirable.

The versatility of DragGAN pushes the boundaries of traditional image manipulation tools. It offers users the capability to edit images as if manipulating a 3D model, resulting in more granular control and precision. Even when manipulating complex and occluded content, such as generating teeth inside a lion’s open mouth, DragGAN delivers realistic results that follow the object’s rigidity, like the bending of a horse’s leg.

While showcasing the power of DragGAN, the researchers shared impressive examples of image manipulation: altering the pose of a dog, adjusting the height and reflections of a mountain range behind a lake, and making extensive changes to the appearance and behavior of a lion. In addition to these potent capabilities, DragGAN’s user-friendly interface stands out, allowing even those unfamiliar with the underlying technology to harness its power.

DragGAN is a remarkable collaborative effort from researchers spanning a range of esteemed institutions. Xingang Pan and Thomas Leimkühler of the Max Planck Institute for Informatics and the Saarbrücken Research Center for Visual Computing, Interaction and AI, both in Germany, worked with Ayush Tewari of MIT CSAIL, USA; Lingjie Liu, affiliated with both the Max Planck Institute for Informatics and the University of Pennsylvania, USA; Abhimitra Meka of Google AR/VR, USA; and Christian Theobalt of the Max Planck Institute for Informatics and the Saarbrücken Research Center for Visual Computing, Interaction and AI.

Point-Based Editing

This advanced system utilizes point-based editing, allowing users to manipulate images by simply clicking and dragging elements within the picture. The ease of use conceals the sophistication behind the scenes, as DragGAN generates brand-new pixels in response to user input, instead of merely altering existing ones. This process enables users to deform an image with precise control over pixels’ relocation, thus manipulating the pose, shape, expression, and layout of objects.
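To make the click-and-drag idea concrete, the loop below is a minimal, illustrative sketch: a toy differentiable "generator" maps a latent vector to a 2D handle-point position, and the latent is nudged by gradient descent until the handle reaches the user's target. This is only an analogy for DragGAN's motion-supervision loop — the real system uses StyleGAN2 and tracks handle points on intermediate feature maps — and all names here are hypothetical.

```python
import numpy as np

# Toy stand-in for a GAN generator: a fixed linear map from a latent
# vector w to a 2D "handle point" position. (Illustrative only; DragGAN
# optimizes StyleGAN2 latents against feature-map supervision.)
A = np.array([[1.5, 0.3],
              [-0.2, 1.1]])

def handle_position(w):
    """Differentiable mapping: latent -> handle-point position."""
    return A @ w

def drag(w, target, lr=0.1, steps=200):
    """Iteratively update the latent so the handle point produced by
    the generator moves toward the user-chosen target point."""
    for _ in range(steps):
        p = handle_position(w)
        # Gradient of 0.5 * ||p - target||^2 with respect to w.
        grad = A.T @ (p - target)
        w = w - lr * grad
    return w

w_final = drag(np.array([0.0, 0.0]), np.array([2.0, 1.0]))
print(np.allclose(handle_position(w_final), [2.0, 1.0], atol=1e-3))
```

The key design point this mirrors is that the user never edits pixels directly: the edit is expressed as a point constraint, and the latent code is optimized so the generator itself re-synthesizes an image satisfying it.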

The authors describe the technology behind DragGAN in a scientific paper they will present at the SIGGRAPH 2023 conference in August. The abstract states:

“Through DragGAN, anyone can deform an image with precise control over where pixels go, thus manipulating the pose, shape, expression, and layout of diverse categories such as animals, cars, humans, landscapes, etc. As these manipulations are performed on the learned generative image manifold of a GAN, they tend to produce realistic outputs even for challenging scenarios such as hallucinating occluded content and deforming shapes that consistently follow the object’s rigidity. Both qualitative and quantitative comparisons demonstrate the advantage of DragGAN over prior approaches in the tasks of image manipulation and point tracking.”

Built on the foundations of Generative Adversarial Networks (GANs), DragGAN brings a new level of control to image manipulation. GANs, which are already renowned for generating realistic outputs, now offer an unprecedented level of control over pixel location with DragGAN. Users can affect not just brightness and color, but also the organization and existence of individual pixels.

The Future of Image Editing with DragGAN

The researchers plan to extend this point-based editing technique to 3D generative models, promising an even wider array of applications in industries like fashion, advertising, and entertainment. The arrival of DragGAN underscores the significance of ongoing research in deep learning and its practical applications, setting the stage for an exciting future in image editing and manipulation.

Given the early promising results and potential applications, the researchers highlight the need for continued funding to explore and optimize this potent technology further. With DragGAN, the horizon of AI-assisted image editing is expanding, ushering in a new era of creative possibilities.

Markus Kasanmascheff
Markus is the founder of WinBuzzer and has been playing with Windows and technology for more than 25 years. He holds a Master's degree in International Economics and previously worked as Lead Windows Expert for Softonic.com.