Safety by design | TechCrunch

Welcome to the TechCrunch Exchange, a weekly newsletter about startups and markets. It is inspired by the daily TechCrunch+ column from which it takes its name. Want it in your inbox every Saturday? Sign up here.

Tech's ability to reinvent the wheel has its drawbacks: It can mean ignoring obvious truths that others have already learned. But the good news is that new founders sometimes figure things out faster than their predecessors did. — Anna

AI, trust and safety

This year is an Olympic year, a leap year . . . and also THE election year. But before you accuse me of American defaultism: I am not only thinking of the Biden versus Trump sequel. More than 60 countries are holding national elections, not to mention the European Parliament vote.

The direction each of these votes takes could impact technology companies; different parties tend to have different views on regulating AI, for example. But even before the elections take place, technology will also have a role to play in ensuring their integrity.

Election integrity probably wasn’t on Mark Zuckerberg’s mind when he started Facebook, and maybe not even when he bought WhatsApp. But 20 and 10 years later, respectively, trust and safety is now a responsibility that Meta and other tech giants cannot escape, whether they like it or not. That means working to prevent misinformation, fraud, hate speech, CSAM (child sexual abuse material), self-harm and much more.

However, AI will likely make the task more difficult, and not just because of deepfakes or because it lowers the barrier for more bad actors. Says Lotan Levkowitz, general partner at Grove Ventures:

All these trust and safety platforms have this hash-sharing database, so I can upload what’s a bad thing there, share it with all my communities, and everyone will stop it together; but today, I can train the model to try to avoid it. So even the most classic trust and safety work, because of generative AI, is becoming harder and harder, because the algorithm can help bypass all these things.
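For readers who haven’t run into the mechanism Levkowitz describes, here is a minimal sketch of how a shared hash database works in principle. The function names and the use of a plain SHA-256 digest are illustrative assumptions on my part; production trust and safety systems generally rely on perceptual hashes (such as PhotoDNA or Meta’s open-source PDQ) so that lightly edited copies still match.

```python
import hashlib

# Illustrative shared "hash database" of known harmful content.
# Hypothetical sketch: real systems use perceptual hashing so that
# near-duplicates still match; exact SHA-256 is used here for brevity.
shared_hash_db: set[str] = set()

def register_bad_content(content: bytes) -> str:
    """Hash a confirmed piece of harmful content and add it to the shared set."""
    digest = hashlib.sha256(content).hexdigest()
    shared_hash_db.add(digest)
    return digest

def is_known_bad(content: bytes) -> bool:
    """Check an upload against the shared database before it is published."""
    return hashlib.sha256(content).hexdigest() in shared_hash_db

# One platform flags an item; every participant can then block the same bytes.
register_bad_content(b"...bytes of a flagged file...")
print(is_known_bad(b"...bytes of a flagged file..."))   # True
print(is_known_bad(b"...a slightly altered copy..."))   # False: exact matching is
                                                        # easy to evade, which is the
                                                        # gap generative AI widens.
```

The weakness this sketch makes obvious is Levkowitz’s point: a model can be trained, or its output lightly perturbed, specifically so that nothing it produces matches what is already in the database.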

From afterthought to forethought

Even though online forums had already learned a thing or two about content moderation, there was no social media playbook for Facebook to follow when it was born, so it is somewhat understandable that it needed some time to rise to the occasion. But it is disheartening to learn from internal Meta documents that, as recently as 2017, there was still internal reluctance to adopt measures that could better protect children.

Zuckerberg was one of five social media CEOs who recently appeared at a U.S. Senate hearing on children’s online safety. Testifying wasn’t a first for Meta by far, but the fact that Discord was included is also worth noting; although it has expanded beyond its gaming roots, its presence is a reminder that threats to trust and safety can arise in many places online. A social gaming app, for example, could also put its users at risk of phishing or grooming.

Will newer companies get up to speed faster than the FAANGs did? That’s not a given: Founders often operate from first principles, which is both good and bad, and the content moderation learning curve is real. But OpenAI is much younger than Meta, so it is encouraging to hear that it is forming a new team to study child safety, even if this may be a result of the intense scrutiny it is under.

However, some startups are not waiting for signs of trouble before they act. ActiveFence, a provider of AI-powered trust and safety solutions and part of the Grove Ventures portfolio, is seeing more inbound requests, CEO Noam Schwartz told me.

“I’ve seen a lot of people reaching out to our team from companies that were just founded or are even pre-launch. They are thinking about the safety of their products from the design phase [and] adopting a concept called safety by design. They are building safety measures inside their products the same way that, today, you think about security and privacy when you develop your features.”

ActiveFence is not the only startup in this space, which Wired described as “trust and safety as a service.” But it is one of the largest, especially since it acquired Spectrum Labs in September, so it is good to hear that its clients include not only big names wary of PR crises and political scrutiny, but also smaller teams that are just getting started. Tech, too, can learn from past mistakes.


