Deep learning growth is putting resource pressure on available compute power. AI and deep learning models are growing exponentially across all industries for a large range of applications. In addition to growth, another problem is model size: deep learning models are huge, with billions, and sometimes trillions, of parameters. Unfortunately, according to IBM, hardware efficiency has lagged behind this exponential growth.

Historically, computation has relied on high-precision 64- and 32-bit floating-point arithmetic. IBM believes that level of precision is not always needed, and it has a term for lowering traditional computational precision: "approximate computing." On its blog, IBM explains the rationale:

> "Do we need this level of accuracy for common deep learning tasks? Does our brain require a high-resolution image to recognize a family member, or a cat? When we enter a text thread for search, do we require precision in the relative ranking of the 50,002nd most useful reply vs. the 50,003rd? The answer is that many tasks, including these examples, can be accomplished with approximate computing."

Approximate computing played an essential role in the design of the new AIU chip. IBM researchers designed the AIU using less precision than a CPU would need; lower precision was vital to achieving high compute densities in the new AIU hardware accelerator.
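IBM has not published the AIU's exact number formats, but the trade-off behind approximate computing can be sketched in a few lines: store values at reduced precision and accept a small, bounded error in exchange for much denser storage and compute. The sketch below uses symmetric 8-bit linear quantization in NumPy; the weight values are synthetic and the scheme is a generic illustration, not IBM's actual hardware format.

```python
import numpy as np

# Synthetic "trained weights" in float32, the conventional full precision.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=1000).astype(np.float32)

# Symmetric linear quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)  # int8 storage: 4x smaller than float32
approx = q.astype(np.float32) * scale          # dequantized approximation

# Rounding guarantees the per-weight error is at most half a quantization step.
max_err = float(np.abs(weights - approx).max())
print(f"scale = {scale:.6f}, max error = {max_err:.6f}")
```

Each weight is off by no more than `scale / 2`, yet takes a quarter of the memory; the same principle (at varying bit widths) is what lets reduced-precision accelerators pack far more arithmetic units into the same silicon area.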