Tri Network


Google announces Imagen, an AI-based image generator with claims of “unprecedented photorealism”

By Laura Wirth
May 24, 2022

Google Imagen is the latest AI text-to-image generator. It has not been released to the public, but alongside the announcement the company shared a research paper, a benchmarking tool called DrawBench for drawing objective comparisons with Imagen’s competitors, and some goofy sample images for your subjective enjoyment. Google also highlighted the potential harms of this technology.

Google Imagen: How a text-to-image model works

The idea is that you simply describe what you want the AI image generator to conjure up, and it does exactly that.
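That "prompt in, image out" contract can be sketched as a minimal interface. Note that `ImagenClient` and everything else below is hypothetical: Google has published no API for Imagen, so this is only a stand-in showing the shape of the interaction, not real code.

```python
# Sketch of the text-to-image interface: the caller supplies only a
# natural-language prompt and gets an image back. `ImagenClient` is a
# hypothetical stand-in, since Imagen has no public API.

from dataclasses import dataclass


@dataclass
class GeneratedImage:
    prompt: str
    width: int
    height: int


class ImagenClient:  # hypothetical client for any text-to-image backend
    def generate(self, prompt: str, size: int = 1024) -> GeneratedImage:
        # A real backend would run a diffusion model conditioned on a text
        # embedding of `prompt`; here we only model the interface.
        return GeneratedImage(prompt=prompt, width=size, height=size)


image = ImagenClient().generate("a corgi riding a bicycle in Times Square")
print(image.width, image.height)  # 1024 1024
```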

The images presented by Google are probably the best of the bunch, and since the tool is not available to the general public, we suggest taking the results and claims with a grain of salt.

Either way, Google is proud of Imagen’s performance, which is perhaps why it released DrawBench, a benchmark for AI text-to-image models. For what it’s worth, Google’s charts show Imagen leading alternatives such as OpenAI’s DALL-E 2.
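A benchmark like this boils down to showing human raters outputs from two models for the same prompt and tallying which they prefer. The toy tally below illustrates that idea only; it is not Google's code, and the vote labels are invented for the example.

```python
# Toy sketch of a DrawBench-style pairwise preference tally: for each prompt,
# a rater votes for model "A", model "B", or "tie". The labels and protocol
# details here are illustrative assumptions, not Google's implementation.

from collections import Counter


def preference_rate(votes):
    """Fraction of decided (non-tie) votes that preferred model A."""
    counts = Counter(votes)
    decided = counts["A"] + counts["B"]
    return counts["A"] / decided if decided else 0.5


votes = ["A", "A", "B", "tie", "A"]  # one rater judgement per prompt
print(preference_rate(votes))  # 0.75
```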


However, just like OpenAI’s offering and others, all such applications have intrinsic flaws: they are prone to producing disconcerting results.

Much like “confirmation bias” in humans, our tendency to see what we believe and believe what we see, AI models trained on vast amounts of data can absorb similar biases. It has been shown time and again that this is a problem for text-to-image generators. So will Google’s Imagen be any different?

In Google’s own words, these AI models encode “several social biases and stereotypes, including a general bias towards generating images of lighter-skinned people and a tendency for images depicting different professions to align with Western gender stereotypes”.

The Alphabet company could still filter certain words or phrases and feed the model better-curated datasets. But at the scale of data these machines operate on, not everything can be sifted through, and not every problem can be solved. Google admits this, saying that “[t]he large-scale data requirements of text-to-image models […] have led researchers to rely heavily on large, mostly uncurated datasets retrieved from the web […] Audits of these datasets revealed that they tend to reflect social stereotypes, oppressive viewpoints, and disparaging, or otherwise harmful, associations with marginalized identity groups”.
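The limits of that filtering approach are easy to demonstrate. A minimal sketch, with an invented blocklist and invented captions (none drawn from any real dataset): exact keyword matching catches the obvious cases and misses trivial paraphrases or obfuscations.

```python
# Sketch of why keyword filtering of web-scraped captions cannot scale:
# a blocklist catches exact matches but misses misspellings and paraphrases.
# The blocklist and captions are illustrative, not from any real dataset.

BLOCKLIST = {"violent", "gore"}


def passes_filter(caption: str) -> bool:
    """Keep a caption only if none of its words is on the blocklist."""
    words = set(caption.lower().split())
    return not (words & BLOCKLIST)


captions = [
    "a violent storm over the sea",  # blocked: exact keyword match
    "a v1olent scene",               # slips through: obfuscated spelling
    "a peaceful meadow at dawn",     # kept: benign
]
kept = [c for c in captions if passes_filter(c)]
print(len(kept))  # 2 of 3 survive; the obfuscated caption is missed
```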

So, as Google says, Imagen “is not suitable for public use at this time”. If and when it becomes available, try telling it: “Hey Google Imagen, there’s no heaven. It’s easy if you try. No hell below us. Above us, nothing but the sky.”

For other news, reviews, features, buying guides, and all things tech, keep reading Digit.in.
