Open-washing and the illusion of AI openness


False perceptions

At the heart of open-washing is a distortion of the principles of openness, transparency, and reusability. Transparency in AI would entail publicly documenting how models are developed, trained, fine-tuned, and deployed. This would include full access to the data sets, weights, architectures, and decision-making processes involved in the models’ construction. Most AI companies fall short of this level of transparency. By selectively releasing parts of their models—often stripped of key details—they craft an illusion of openness.

Reusability, another pillar of openness, fares no better. Companies allow access to their models via APIs or lightweight downloadable versions but prevent meaningful adaptation by tying usage to proprietary ecosystems. This partial release offers a calculated level of reusability that maximizes the value extracted by the big cloud providers while minimizing the risk of enabling competitors.

For example, OpenAI’s GPT models are accessible, but their integrations are invariably tied to specific web clients, libraries, and applications owned by the company. Enterprise developers do not get free rein to adjust, adapt, or redistribute these models; doing so would run afoul of the licensing agreements. One developer friend of mine put it best when he said, “This stuff is about as open as a bank vault.”
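
To make the contrast concrete, here is a minimal Python sketch, not a definitive implementation, assuming the openai and transformers packages, an OPENAI_API_KEY environment variable, a currently available API model name such as gpt-4o-mini, and a hypothetical placeholder repository "example-org/open-model" standing in for any genuinely open-weights release. Only the second path hands the developer something they can inspect, fine-tune, or redistribute under the model’s license; the first keeps everything behind the vendor’s API boundary.

# Sketch: hosted-API access versus locally downloadable open weights.
# Assumes: OPENAI_API_KEY is set, and "example-org/open-model" is a
# placeholder name, not a real model repository.

from openai import OpenAI
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1) Hosted access: the model runs on the vendor's servers; weights,
#    training data, and architecture details stay behind the API.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Define open-washing in one sentence."}],
)
print(reply.choices[0].message.content)

# 2) Local open weights: the full parameter files are downloaded, so they
#    can be inspected, fine-tuned, or redistributed under the license.
repo = "example-org/open-model"  # hypothetical open-weights repository
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
inputs = tokenizer("Open-washing means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))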
