Artificial intelligence is creeping into every piece of technology you own. It’s in your phones, in your computers, and in your speakers, just to name a few. I guess it was inevitable that AI would come for your cameras, as well. Don’t worry—AI isn’t here to eliminate the photographer; it’s simply here to make certain more complicated operations a lot easier to perform. Also, this isn’t AI in the movie sense. No cameras are going to launch a robot revolution any time soon—though the DJI RoboMaster S1 looks eerily similar to the T-1 from Terminator 3: Rise of the Machines. Anyway… The version of artificial intelligence you will see in your cameras is machine learning, and it is being used mainly to improve autofocus.
What is Machine Learning?
A subset of artificial intelligence, machine learning refers to the process of having a machine develop its own operational rules based on a set of data or information. Put another way, it’s having a computer teach itself what to do. That is, perhaps, the simplest explanation I can come up with, in the hopes of getting the general idea across to readers who may have seen the term “AI” pop up in advertisements. If you are a machine learning or AI expert, I would love to have you post in the Comments section and leave a more detailed definition, but I’m going to move on for the sake of the article.
A basic autofocus system, say contrast detection, can be implemented relatively simply: you tell the computer to stop adjusting focus when the selected area of the image shows the greatest contrast, then you add some extra rules, such as emphasizing objects closer to the camera. Machine learning lets camera developers take a different approach. By showing the system enormous numbers of images, they let the camera build its own incredibly complex set of rules for autofocus and then apply those rules to the real-time feed coming off the sensor.
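To make the contrast-detect idea a little more concrete, here is a rough Python sketch of the hand-written approach. It is not any manufacturer’s actual code; capture_frame and focus_window are hypothetical stand-ins for reading a frame from the sensor at a given lens position and for the AF area you have selected.

```python
import numpy as np

def contrast_score(region: np.ndarray) -> float:
    """Score sharpness as the spread of local brightness differences."""
    gy, gx = np.gradient(region.astype(float))
    return float(np.var(np.hypot(gx, gy)))

def contrast_detect_af(lens_positions, capture_frame, focus_window):
    """Step the lens through its range and keep the position with the most contrast."""
    y0, y1, x0, x1 = focus_window               # the user-selected AF area
    best_pos, best_score = None, -1.0
    for pos in lens_positions:
        frame = capture_frame(pos)              # hypothetical sensor readout at this lens position
        score = contrast_score(frame[y0:y1, x0:x1])
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```

Every extra behavior, such as preferring closer subjects, has to be bolted on as another hand-written rule, and that rule-writing is exactly what machine learning takes over.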
An example of this can be seen in the Olympus OM-D E-M1X, one of the first cameras to tout its use of artificial intelligence. Olympus used “Deep Learning Technology” to look at images of motor vehicles, aircraft, and trains and then learn how best to track and focus on these subjects. The more images the program is shown, the more it learns about what makes a better photo of these vehicles, and the more specific and accurate its rules become for keeping the AF system locked on the subject. This means that not only will the E-M1X’s Intelligent Subject Detection AF system track and hold on a fast-moving formula car, it will also know to look for the driver’s helmet, since that is where the image should be focused.
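Here is a loose sketch of how a learned detector’s output could steer the AF point. The detector object and the label priorities below are hypothetical placeholders for a model trained on thousands of labeled images; this illustrates the general idea rather than Olympus’s actual implementation.

```python
# Hypothetical priorities: focus on the most specific subject the model can find.
PRIORITY = {"helmet": 3, "driver": 2, "car": 1}

def pick_focus_target(frame, detector):
    """Ask a trained model where the subject is and return the best region to focus on."""
    detections = detector(frame)   # assumed to return dicts like {"label", "score", "box"}
    best_rank, best_box = None, None
    for d in detections:
        rank = (PRIORITY.get(d["label"], 0), d["score"])
        if best_rank is None or rank > best_rank:
            best_rank, best_box = rank, d["box"]
    return best_box                # None means fall back to conventional AF
```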
Processors from just a few years ago had to make do with those simpler, hand-written systems because they lacked the raw power to do real-time image analysis, and DSLRs couldn’t even provide a full-time video feed from the sensor. This meant the camera might focus on the front of that aforementioned formula car instead of the driver tucked away in the middle. Now, with mirrorless designs and modern processors, cameras can analyze the live feed from the sensor and focus on and track the correct subject more accurately.
It would be incredibly difficult and time consuming for an individual to write the rules for a program that can do all this advanced tracking. By allowing the computer essentially to learn for itself, we can get it done faster and, frankly, better than a human could. Machine learning also gets better the more data it is able to analyze: show it more and more photos and the algorithms become more and more accurate. And, since it is software, a camera can be given an entirely new set of tools or functionality long after release, provided the underlying components can handle it. That is part of why modern cameras now receive numerous firmware updates that revamp their operation.
Real-World Use and Future Potential
I already mentioned the E-M1X, but the other major player in the AI-assisted autofocus space is Sony. I’ve seen a few presentations from the company with a slide labeled “Sony x AI”; the most recent releases to benefit are the a6600, the a6100, and a few firmware updates for existing cameras. Even the companies that aren’t coming right out and saying they are using AI are likely using it to develop systems and features for their latest cameras.
Currently, at the beginning of 2020, cameras are applying AI in a limited number of ways. It is helping to find a subject’s eye or determine the best place to focus on a moving vehicle. In the future, this could mean that all those autofocus modes and settings you have to spend time fine-tuning simply disappear. When you shoot a formula race, the camera will recognize the cars and automatically switch its settings to track them. Then, when the race is over and you head to the winner’s circle to capture some crowd shots and portraits of the winning driver, it will switch to a mode that locks onto their eye and knows to hold there.
Also, while I have been focusing on mirrorless cameras that leverage AI to improve autofocus, it is worth mentioning that AI-assisted technology has already been implemented in smartphone cameras to boost image quality. The best examples are the portrait and night modes now found on many flagships. Portrait modes perform advanced analysis of the image to determine where the edges of the main subject are and then use an intelligent blur algorithm to make the image appear similar to one from a traditional camera setup, such as a DSLR with a fast lens. Night mode is even more impressive: the phone quickly snaps multiple images and then analyzes all that image data to create a final photo that is sharper and cleaner than the camera could capture in a single exposure. All of this relies on large data sets and advanced machine learning.
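As a rough illustration of the night-mode idea, here is a minimal sketch that merges a burst of frames by averaging them so random noise partially cancels out. Real pipelines also align the frames, reject motion, and tone-map the result; none of that is shown here.

```python
import numpy as np

def merge_burst(frames):
    """Average a list of already-aligned frames (HxW or HxWx3 uint8 arrays)."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    merged = stack.mean(axis=0)    # random noise shrinks roughly with the square root of the frame count
    return np.clip(merged, 0, 255).astype(np.uint8)
```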
It is interesting to consider, because since digital took over, advancements in photography have tended to start with the smaller formats. The small sensors found in smartphones tend to get the latest features and tech first, and then those changes trickle up to the APS-C and full-frame sensors found in many DSLRs and mirrorless cameras. This can be seen clearly in Sony’s camera and sensor development, because Sony is very public with both: small smartphone sensors received “stacked” architecture first, then the so-called Exmor RS sensors went up to the 1"-type chips used in the RX100 series, and, finally, the same tech arrived in the a9’s full-frame sensor.
Looking forward, it is easy to believe that cameras are going to take a bit of the processing off photographers’ shoulders with advanced technology. Smart noise-reduction algorithms may pop out cleaner raw images than previously possible. Advanced multi-shot modes may become the norm and boost resolution with ease. And someone, I’m hoping, will try something just a little bit crazy and put out some unthinkable tech in the next few years.
Do you have any questions or concerns about the new technology being used in our cameras? Do you dislike the use of “smart” technology? Sound off in the Comments section, below!