
IBM’s Breakthrough Distributed Computation for Deep Learning Workloads (Update)

Author: Kevin Leen / Source: gearsofbiz.com

NEWS ANALYSIS: Why deep learning is a ‘killer app’ for computers, and how IBM has figured out how to distribute computing for much faster processing of big-data artificial intelligence workloads.

Off the top, it sounds simple enough: You have one big, fast server processing an artificial-intelligence-related, big data workload. Then the requirements change; much more data needs to be added to the process to get the project done in a reasonable span of time. Logic says that all you need to do is add more horsepower to do the job.

As Dana Carvey used to say in his comedy act when satirizing President George H.W. Bush: “Not gonna do it.”

That’s right: Until today, adding more servers would not have solved the problem. Deep-learning analytics systems have so far been able to run on only a single server; these workloads simply haven’t scaled by adding more servers, and there are major back-end reasons for that.

All that is now history. IBM on Aug. 8 announced that its researchers have changed this by coming up with new distributed deep learning software that has taken quite a while to develop. This is very probably the biggest step forward in artificial intelligence computing in at least the last decade.

Connecting Servers for AI Jobs Sounds Easy, but Isn’t

By enabling a group of servers to work in concert on a single problem, IBM Research has reached a milestone in making deep learning much more practical at scale: AI models can now be trained on millions of photos, drawings or even medical images, with faster training and significant gains in image-recognition accuracy, as evidenced by IBM’s initial results.

Also on Aug. 8, IBM released a beta version of its PowerAI software, which lets cognitive and AI developers build more accurate AI models that produce better predictions. The software will help shorten the time it takes to train AI models from days and weeks down to hours.
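The article doesn’t describe the PowerAI programming interface, but the general idea behind distributed deep learning—each server trains on its own slice of the data and the workers average their gradients after every step—can be sketched with an off-the-shelf framework. The sketch below uses PyTorch’s DistributedDataParallel purely as a stand-in, not IBM’s software; the toy model, random data and process-group settings are illustrative assumptions.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # Each worker joins a process group; in a real multi-server job the
    # rendezvous address would point at one of the machines, not localhost.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = torch.nn.Linear(128, 10)         # toy stand-in for an image model
    ddp_model = DDP(model)                   # gradients are all-reduced across workers
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):
        # Each rank would see a different shard of the training data;
        # random tensors stand in for real images and labels here.
        inputs = torch.randn(32, 128)
        labels = torch.randint(0, 10, (32,))

        optimizer.zero_grad()
        loss = loss_fn(ddp_model(inputs), labels)
        loss.backward()    # gradients are averaged across all workers during backward
        optimizer.step()   # every worker applies the same synchronized update

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2         # pretend we have two servers/processes
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```

In this data-parallel pattern, every added server contributes another shard of the data, but every step also requires a synchronization round—presumably one of the “back-end reasons” scaling past a single server has been hard until now.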

What exactly makes deep learning so time-consuming? First, it involves many gigabytes or even terabytes of data. Second, the software that can comb through all of this information is only now being optimized for workloads of this kind.

One thing a lot of people haven’t yet gotten straight is what sets deep learning apart from machine learning, artificial intelligence and cognitive intelligence.

Deep Learning a Subset of Machine Learning

“Deep learning is considered to be a subset, or a particular method, within this bigger term, which is machine learning,” Sumit Gupta, IBM Cognitive Systems Vice-President of High Performance Computing and Data Analytics, told eWEEK.

“The best example I always give about deep learning is this: When we’re teaching a kid how to recognize dogs and cats, we show them lots of images of dogs, and eventually one day the baby says ‘dog.’ The baby doesn’t look at the fact that the dog has four legs and a tail, or other details about it; the baby is actually perceiving a dog. That’s the big difference between the traditional computer models, where they were sort of ‘if and else’-type models versus perception. Deep learning tries to mimic…
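Gupta’s contrast between “if and else”-type models and models that perceive can be made concrete with a toy sketch. The rule-based classifier below encodes a programmer’s explicit checks, while the second model is simply fit to labeled examples; the features, data and library choice (scikit-learn) are illustrative assumptions, not anything from IBM.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Traditional "if and else"-type model: a programmer hand-codes the rules.
def rule_based_is_dog(has_four_legs: bool, has_tail: bool, barks: bool) -> bool:
    return has_four_legs and has_tail and barks

# Learned model: show it many labeled examples and let it infer the pattern,
# the way the child in Gupta's example learns "dog" from seeing many dogs.
rng = np.random.default_rng(0)
X = rng.random((200, 16))                    # pretend image features (stand-ins for pixels)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # pretend dog-vs-cat labels
model = LogisticRegression().fit(X, y)

print(rule_based_is_dog(True, True, False))  # the rule says "not a dog" if it doesn't bark
print(model.predict(X[:5]))                  # the learned model labels new examples itself
```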
