Women In AI: Irene Solaiman, Head of Global Policy at Hugging Face

To give AI academics and others their much-deserved – and overdue – time in the spotlight, TechCrunch is launching an interview series focusing on remarkable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting key work that often goes overlooked. Read more profiles here.

Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to the release of GPT-2, a predecessor to ChatGPT. After serving as head of AI policy at Zillow for nearly a year, she joined Hugging Face as head of global policy. Her responsibilities there range from building and leading the company’s global AI policy to conducting socio-technical research.

Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electronics engineering, on AI issues, and is a recognized AI expert at the intergovernmental Organisation for Economic Co-operation and Development (OECD).

In short, how did you get started in AI? What attracted you to the field?

A completely non-linear career path is commonplace in AI. My budding interest began the way many teenagers with awkward social skills find their passions: through science fiction media. I first studied human rights policy and then took computer science courses, because I saw AI as a way to work for human rights and build a better future. Being able to conduct technical research and lead policy in a field with so many unanswered questions and untaken paths keeps my job exciting.

What work are you most proud of (in the field of AI)?

I take great pride when my expertise resonates with people across the AI field, especially my writing on release considerations in the complex landscape of AI system releases and openness. Seeing my paper on an AI release gradient framing technical deployment prompt discussions among scientists, and be used in government reports, is affirming – and a good sign that I’m working in the right direction! Personally, some of the work that motivates me most involves cultural value alignment, which is dedicated to ensuring that systems work best for the cultures in which they are deployed. With my incredible co-author and now dear friend Christy Dennison, working on a Process for Adapting Language Models to Society was a heartfelt project (and many hours of debugging) that has shaped safety and alignment work today.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I have found, and am still finding, my people – from working with incredible company leaders who care deeply about the same issues I prioritize, to great research co-authors with whom I can start every working session with a mini therapy session. Affinity groups are hugely helpful in building community and sharing advice. Intersectionality is important to highlight here; my communities of Muslim and BIPOC researchers are continually inspiring.

What advice would you give to women looking to enter the AI field?

Have a support group whose success is your success. In youth terms, I believe I’m a “girl’s girl.” The same women and allies I entered this field with are my favorite coffee dates and late-night panic calls before a deadline. One of the best pieces of career advice I’ve read was from Arvind Narayanan, on the platform formerly known as Twitter, establishing the “Liam Neeson Principle”: you don’t have to be the smartest of them all, but you should have a particular set of skills.

What are the most pressing issues facing AI as it evolves?

The most pressing problems themselves are evolving, so the meta-answer is: international coordination for safer systems for all people. People who use and are affected by these systems, even within the same country, have different preferences and ideas about what is safest for them. And the problems that arise will depend not only on how AI evolves but also on the environment in which it is deployed; safety priorities and our definitions of capability differ regionally, such as a higher threat of cyberattacks on critical infrastructure in more digitized economies.

What issues should AI users be aware of?

Technical solutions rarely, if ever, address risks and harms holistically. While there are steps users can take to increase their AI literacy, it’s important to invest in a host of safeguards for risks as they evolve. For example, I’m excited about more research into watermarking as a technical tool, and we also need coordinated guidance from policymakers on the distribution of generated content, particularly on social media platforms.

What is the best way to develop AI responsibly?

With the people affected, and by constantly re-evaluating our methods for assessing and implementing safety techniques. Both beneficial applications and potential harms are constantly evolving and require iterative feedback. The means by which we improve AI safety should be examined collectively as a field. The most popular model evaluations in 2024 are much more robust than those I was running in 2019. Today, I’m much more bullish on human evaluations than on red-teaming. I find human evaluations extremely useful, but as more evidence emerges of the mental burden and disparate costs of human feedback, I’m increasingly bullish on standardizing evaluations.

How can investors better promote responsible AI?

They already are! I’m glad to see many investors and venture capital firms actively engaging in safety and policy conversations, including through open letters and congressional testimony. I’m keen to hear more from investors’ expertise on what fuels startups across sectors, especially as we see more AI use in fields outside the core tech industries.
