Why we signed the Montreal Declaration for a Responsible Development of Artificial Intelligence
With artificial intelligence (AI) influencing every aspect of our lives, and with its research and commercialization opportunities continuing to grow, the question isn't whether we should develop it responsibly; it's how.
Last year, over 400 participants came together at the Forum on the Socially Responsible Development of Artificial Intelligence to discuss cybersecurity, legal liability, moral psychology, jobs, and other themes, beginning the conversation around the impact of artificial intelligence systems (AIS) on humans. Now that it's possible to create autonomous systems that perform tasks once the sole domain of human intelligence, and that strongly influence data-driven decisions, it's imperative to consider AI's potential ethical and social effects.
How will AI impact security and privacy? What is the impact on social equality and cultural diversity? Will AI disrupt careers and upend the job market?
These are tough questions, and the result of the 2017 forum was a draft declaration setting out a framework of ethical guidelines for the development of AI. After a months-long consultation with the public, experts, and government decision makers, the final Montreal Declaration for a Responsible Development of Artificial Intelligence was signed on December 4, 2018 at the Society for Arts and Technology.
As of today, we are the first private sector signatory to the Declaration, reinforcing our commitment to an ethical framework for AIS technology development. The Declaration has three main objectives:
- Develop an ethical framework for the development and deployment of AI
- Guide the digital transition so everyone benefits from this technological revolution
- Open a national and international forum for discussion to collectively achieve equitable, inclusive, and ecologically sustainable AI development
How the Montreal Declaration applies to us
MindBridge is building an AI platform that helps people analyze and understand vast amounts of their data in ways never before possible, so it's critical to follow a development philosophy that keeps our users at the center of the loop. After all, we're building it for you.
We firmly believe that AI is not meant to replace humans; rather, its greatest benefit is empowering people to make better decisions for themselves and society, without imposing constraints based on any specific beliefs. As the Declaration's "Respect for autonomy" principle states:
"AIS must be developed and used while respecting people's autonomy, and with the goal of increasing people's control over their lives and their surroundings."
Another principle is democratic participation: "AIS processes that make decisions affecting a person's life, quality of life, or reputation must be intelligible to their creators." Our human-centric approach to the MindBridge platform embodies this philosophy in every aspect of the system. Our CTO, Robin Grosset, explains the details and provides concrete examples in his recent blog post.
We embraced these and other principles long before the Declaration was signed, so becoming the first private company to get on board was an easy decision. Now that it's official, we look forward to working with industry, government, and other parties to ensure a responsibly developed AI future for all of us.