In a recent blog post, Google introduced a new AI that can judge your photos on both technical and aesthetic quality. According to Google’s researchers, the new network “sees” photos almost the way humans do. With time, it could become even more accurate, and it could affect image editing workflows, the judging of photo competitions, and more.
Assessing the technical quality of images shouldn’t be a problem for neural networks; after all, they can detect observable, pixel-level degradations. Estimating aesthetic quality is harder because it’s subjective and involves emotions, personal opinions, and impressions. The NIMA (Neural Image Assessment) model presented on the Google Research Blog uses a deep CNN (convolutional neural network) that differs from previous tools. The researchers claim it’s “trained to predict which images a typical user would rate as looking good (technically) or attractive (aesthetically).”
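To make the general idea more concrete, here is a minimal sketch (in Python/PyTorch, not Google’s released code) of how such a model is typically structured: a CNN backbone feeding a small head that outputs a probability distribution over the 1–10 rating buckets, which is then collapsed into a single quality score. The backbone choice (MobileNetV2), layer sizes, and dropout value below are illustrative assumptions, not details confirmed by the blog post.

```python
# Sketch of a NIMA-style quality scorer: CNN backbone + softmax over rating buckets.
# Architecture details here are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class NIMASketch(nn.Module):
    def __init__(self, num_buckets: int = 10):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)  # in practice, pretrained weights would be used
        self.features = backbone.features
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Dropout(0.75),
            nn.Linear(backbone.last_channel, num_buckets),
            nn.Softmax(dim=1),  # probability assigned to each rating 1..10
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


def mean_score(dist: torch.Tensor) -> torch.Tensor:
    """Collapse a predicted rating distribution into a single 1-10 score."""
    buckets = torch.arange(1, dist.shape[1] + 1, dtype=dist.dtype)
    return (dist * buckets).sum(dim=1)


# Usage: score a batch of already-preprocessed 224x224 images.
model = NIMASketch().eval()
images = torch.rand(4, 3, 224, 224)  # stand-in for real photos
with torch.no_grad():
    scores = mean_score(model(images))
print(scores)  # an untrained model will hover around the middle of the scale
```

Training such a head against human rating data is what would give the scores meaning; the point of the sketch is simply that the output is a rating distribution rather than a single hard label.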
In their paper, the researchers show examples of the ratings predicted by the AI alongside the actual average ratings given by humans. They also tested the system by letting it compare the quality of the same image distorted in different ways. Here are the results, with the automatic rating listed first and the human rating in parentheses:
The main goal of this approach is to correlate with human perception of “aesthetically pleasing” and “high-quality” images. According to the developers, the next challenge is to test the model further and improve it over time. It has many possible applications: helping users find the best photos in a batch, facilitating intelligent image editing, and assisting judges in photography competitions, to name a few.

Google’s new AI reminded me of Everypixel’s attempt to judge photos using neural networks. Judging from my tests and people’s comments, their rating system isn’t exactly the most accurate one, but at least their suggested tags work wonderfully.

Google’s algorithm seems promising so far, but it still raises some questions. I find this research fascinating, and I’m sure this AI will find its applications. The most likely use I see is automatic image enhancement, especially since it judges technical quality as well. However, I’m not convinced AI could ever replace humans when it comes to judging the aesthetic quality of photos. Subjectivity involves the emotions, taste, and opinions of individual people. Even if the average human rating marks a photo as highly aesthetically pleasing, an individual can still disagree. If that person is judging a competition or looking for the best image in a batch, the model’s verdict probably wouldn’t work for them.

What do you think? Could machines ever replace humans in judging the aesthetic quality of photos?

[via DPReview, Google Research Blog; image credits: Google Research Blog]