Unprecedented torrents of data flow continually out of research labs, but making sense of it all remains a major scientific bottleneck. How software is evolving to transform this data deluge into knowledge is the topic of a news story in Chemical & Engineering News, the weekly newsmagazine of the American Chemical Society.
Rick Mullin, senior editor at C&EN, points out that statistical models have allowed researchers to reduce the number of experiments they run, yet 40 percent of experiments are still unnecessarily repeated because of inefficient experimental design or inadequate information technology. To tame the situation, multiple companies have stepped up with new software to help researchers gain control over the massive volumes of data that today's high-throughput labs produce.
The article describes the latest advances in this area, which include new ways to search, access, visualize and analyze data. Some software packages allow scientists to aggregate and analyze information from various sources. Others combine technology with consulting services to convert raw data into a vendor-independent storage format. While software providers — and thus the scientists, the science and the public who benefit from their work — have made considerable progress, advanced informatics that can function at the scale of tens of thousands of variables is still in the works.