Dedicated to ensuring artificial intelligence systems align with human values and objectives, working towards a future where AI advancement serves humanity's best interests.
Read our Scaling Alignment Blog →
AI alignment is the problem of ensuring that artificial intelligence systems act in accordance with human values and objectives. In our lab, we are developing methods that help AI systems understand and adhere to these values, so that their actions are safe, ethical, and beneficial.
We aim to prevent AI from behaving unpredictably or causing harm, even when it is technically functioning correctly. Our goal at Kwaai is to create AI that reliably aligns with human interests and societal well-being.
Our goals this year are to produce a position paper and to run systemic AI alignment experiments. Right now, we are primarily focused on the upcoming experiments.
Every other Thursday · 9:00–10:00 AM
Time zone: America/Los_Angeles
Ryan Steubs
Director of AI Alignment @ Kwaai
Connect on LinkedIn →