In defense of the black box
Published 5 April 2019
The black box in AI is not really a black box
Elizabeth A. Holm wrote an article entitled "In defense of the black box" (1). Holm conflates black boxes that are opaque by intention with those that are opaque only out of ignorance. Many researchers do not know that deep learning, the archetypal black box, can be converted into an explainable decision tree. Geoffrey Everest Hinton, a former CMU faculty member (1982–1987), proposed a way to mitigate the black-box problem (2): a soft decision tree distilled from a deep network, which has been implemented in several open-source repositories (3, 4). Beyond GPU-based deep learning, explainable decision-tree output is also available for CPU-based ensemble methods in open-source machine-learning libraries such as scikit-learn. In other words, the black-box problem in AI can be eliminated if we choose to address it. An AI system that is kept a black box intentionally is a separate issue.
References:
1. E. A. Holm, Science 364, 26–27 (2019).
2. https://arxiv.org/abs/1711.09784
3. https://github.com/kimhc6028/soft-decision-tree
4. https://github.com/AaronX121/Soft-Decision-Tree
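The distillation idea the letter describes can be made concrete with a toy sketch. The code below is my own illustration, not Hinton's soft-decision-tree algorithm and not scikit-learn code; the names `black_box` and `fit_stump` are hypothetical. It queries an opaque classifier on a grid of inputs and fits a one-node decision tree (a stump) whose single rule reproduces most of the black box's behavior in human-readable form.

```python
# Toy sketch of model distillation: replace an opaque classifier with
# an interpretable surrogate rule fitted to its own predictions.

def black_box(x):
    # Stand-in for an opaque model: a hidden linear rule the user
    # cannot inspect directly.
    return 1 if 0.7 * x[0] + 0.3 * x[1] > 0.5 else 0

def fit_stump(samples, labels):
    """Exhaustively search for the single 'x[f] > t' rule (a one-node
    decision tree) that best reproduces the black box's labels."""
    best_f, best_t, best_acc = 0, 0.0, -1.0
    for f in (0, 1):
        for t in sorted({x[f] for x in samples}):
            acc = sum((x[f] > t) == bool(y)
                      for x, y in zip(samples, labels)) / len(samples)
            if acc > best_acc:
                best_f, best_t, best_acc = f, t, acc
    return best_f, best_t, best_acc

# Probe the black box on a grid of inputs, then distill.
grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
labels = [black_box(x) for x in grid]
f, t, acc = fit_stump(grid, labels)
print(f"surrogate rule: predict 1 iff x[{f}] > {t}  (fidelity {acc:.2f})")
```

In practice one would use real tooling rather than this sketch: for example, scikit-learn's `sklearn.tree.export_text` renders a fitted decision tree as exactly this kind of readable if/then rule list, and the repositories in (3) and (4) implement the soft-decision-tree distillation of (2).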
RE: In defense of the black box
Elizabeth Holm nods to Douglas Adams and his notional "Deep Thought" computer in her defense of black-box computation. However, Professor Holm makes little effort to help users of black-box algorithms distinguish between trustworthiness and truthiness. Even official regulation of commercial black-box implementations can fail (q.v. Boeing's 737 MAX flight control software) if appropriate engineering choices are overridden by managerial demands.
At the very least, any black-box software should be distributed and used in a state of maximum transparency. This means the developer should provide software documentation as well as use cases, and the user, particularly when publishing results, should provide a full account of parameter selections and analysis choices. In other words, scientists and engineers must be willing to provide the clarity that the scientific method demands.
RE: In defense of the black box
Dear Colleagues,
As with many things, that esteemed philosopher of science, Richard Feynman, said it best. On numerous occasions, he remarked on how much fun it is to think about the puzzles and problems of science. Will AI take the fun out of thinking about the puzzles of science if we don't understand why the answer the black box gives us is what it is?