1. TechCrunch is launching a series of interviews to highlight women who have contributed to the AI revolution.
2. Ewa Luger is a key figure in the AI field, focusing on social, ethical, and interactional issues in data-driven systems.
3. Pressing issues in the AI field include navigating a male-dominated industry, the environmental impact of large-scale models, and the need for responsible AI governance that keeps pace with innovation.
TechCrunch is launching a series of interviews highlighting women who have made significant contributions to the AI revolution, giving them long-overdue recognition. Ewa Luger, co-director of the Institute of Design Informatics and of the BRAID program, focuses on social, ethical, and interactional issues in data-driven AI systems. She has worked with policymakers and industry to address concerns around design, the distribution of power, exclusion, and user consent.
Among the work Luger is proudest of are a paper on the user experience of voice assistants and her ongoing leadership of the BRAID program, which aims to build a responsible AI ecosystem in the UK. She stresses the importance of bringing arts and humanities knowledge into AI policy and regulation to avoid potential harms. BRAID has funded multiple projects and plans to tackle AI literacy, spaces of resistance, and frameworks for recourse.
In navigating the male-dominated tech industry, Luger points to the higher standards and expectations placed on women, as well as her own need to push beyond her comfort zone and set firm boundaries. She advises women entering the AI field to pursue opportunities that let them level up and not to shy away from roles for which they may not feel fully qualified.
The most pressing issues facing AI center on the potential harms of AI systems, the environmental impact of large-scale models, and the need for regulation to keep pace with AI innovation. Luger emphasizes trust, veracity, and authenticity in AI use, along with the need to address bias and to ensure proper governance and stress-testing of AI systems.
To build AI responsibly, Luger recommends diversifying the field, training systems architects in moral and socio-technical issues, involving stakeholders in governance and design, and implementing mechanisms for opt-out, contestation, and recourse. She argues that investors should prioritize responsible AI to build trust with users and avoid potential harms, stressing that values and incentives must align to push responsible AI forward.