Geneva Macro Labs with Paul Wang on AI and governance

My talk for interesting folks at Geneva Macro Labs on December 9, 2022. LinkedIn post from them here:

How would #techgovernance differ if the earning of #societal #trust were considered and systematically incorporated? This was Hilary Sutcliffe’s opening question of her presentation at our last webinar on #AIgovernance and trust. Hilary, director at SocietyInside and of the TIGTech - Earning Trust in Tech Governance Initiative, is known for her “trust-thought of the day” campaign.

📺 Watch the debate here: https://lnkd.in/gkcs3Jbc

Driven to find out what happens if efforts towards #trustworthiness and earning trust are built into the design of AI governance institutions, Hilary and her colleagues analysed seven #trust drivers. Their work builds on the observation that, for many big corporates, AI governance has remained an afterthought or worse. Even in this fast-changing technological era, #humanrights law and privacy and data #law already cover a great deal of AI governance.

Key takeaways
👉🏾 Because governments struggle to enforce AI law, #softlaw can be a first starting point. Citizens trust governance most when they know it exists and can see it working. Hence, #communication around AI governance is indispensable.
👉🏾 Soft law is mainly #selfgovernance by companies. It works best when regulation is imminent: evidence suggests that the threat of regulation can trigger a genuine surge of soft-law activity.
👉🏾 A more #collaborative, communicative tech governance by #regulators benefits from three competencies: (1) providing evidence of trustworthiness (a new approach to communication), (2) building trusted environments for collaborative governance, and (3) involving citizens in tech governance.
