What happened to security? Privacy?


The following is a guest post from John deVadoss, board of directors of the Global Blockchain Business Council in Geneva and co-founder of the InterWork Alliance in Washington, DC.

Last week in Washington DC I had the opportunity to present and discuss the security implications of AI with several members of Congress and their staffs.

Generative AI today reminds me of the internet of the late 1980s – basic research, latent potential, and academic use, but not yet ready for the public. This time, suppliers’ unfettered ambition, fueled by minor-league venture capital and galvanized by Twitter echo chambers, is accelerating AI’s Brave New World.

The so-called ‘open’ foundation models are tainted and unsuitable for consumer and commercial use; privacy abstractions, where they exist, leak like a sieve; security constructs are still a work in progress, as the attack surface and the threat vectors are still being understood; and the less said about the illusory guardrails, the better.

So, how did we get here? And what happened to security? Privacy?

“Compromised” foundation models

The so-called ‘open’ models are anything but open. Various vendors tout their degree of openness by providing access to the model weights, the documentation, or the tests. Yet none of the major vendors come anywhere close to providing the training datasets, or their manifests or lineage, which would be needed to replicate and reproduce their models.

This opacity around the training datasets means that if you wish to use one or more of these models, then as a consumer or as an organization you have no way to know the extent of the data pollution with respect to IP, copyrights, and so on, not to mention potentially illegal content.


Crucially, without the manifests of the training datasets, there is no way to verify or validate the absence of malicious content. Nefarious actors, including state-sponsored actors, plant Trojan horse content across the Internet; the models ingest it during training, leading to unpredictable and potentially malicious side effects at inference time.
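To make the manifest point concrete, here is a minimal sketch of what verifiable training-data provenance could look like: a published JSON manifest of shard digests that any downstream auditor can recompute. The file layout, manifest format, and function names are illustrative assumptions; no major vendor publishes anything of the sort today.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to avoid loading it all at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: Path, data_dir: Path) -> list[str]:
    """Return the shards whose digests do not match the published manifest.

    The manifest is assumed (hypothetically) to be a JSON mapping of
    shard filename -> SHA-256 digest, released alongside the model weights.
    """
    manifest = json.loads(manifest_path.read_text())
    mismatches = []
    for name, expected in manifest.items():
        if sha256_of(data_dir / name) != expected:
            mismatches.append(name)
    return mismatches

# Hypothetical usage by a downstream auditor:
# bad = verify_manifest(Path("train_manifest.json"), Path("./training_shards"))
# print("tampered or substituted shards:", bad)
```

Even this modest level of transparency would let a downstream user detect substituted or tampered shards, though it says nothing about what the shards themselves contain.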

Keep in mind that once a model is compromised, there is no way for it to unlearn the tainted data; the only option is to destroy the model.

“Porous” security

Generative AI models are the ultimate security honeypots, because ‘all’ of the data is held in one container. In the age of AI, new classes and categories of attack vectors are emerging; the industry has yet to come to terms with the implications, both with respect to securing these models against cyber threats and with respect to how these models are used as tools by cyber threat actors.

Malicious prompt injection techniques can be used to poison the index; data poisoning can be used to corrupt the weights; embedding attacks, including inversion techniques, can be used to pull rich data out of the embeddings; membership inference can be used to determine whether certain data was in the training set; and so on. This is just the tip of the iceberg.
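As an illustration of how simple some of these attacks are in principle, below is a minimal loss-threshold membership-inference sketch against an open language model. It uses gpt2 purely as a stand-in, and the threshold is an illustrative assumption; real attacks calibrate against reference data and shadow models.

```python
# Minimal loss-threshold membership-inference sketch: samples the model has
# memorized tend to show unusually low next-token loss.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model only
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def candidate_loss(text: str) -> float:
    """Average next-token loss of the text under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

THRESHOLD = 2.5  # illustrative assumption; attackers calibrate this in practice
sample = "Example sentence whose membership in the training set we want to test."
verdict = "likely member" if candidate_loss(sample) < THRESHOLD else "likely non-member"
print(verdict)
```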

Threat actors can gain access to confidential data via model inversion and programmatic queries; they can corrupt or otherwise influence a model’s latent behavior; and, as mentioned earlier, the out-of-control ingestion of data at large opens the door to embedded, state-sponsored cyber activity via Trojans and more.

“Leaky” privacy

AI models are useful because of the datasets they are trained on; the indiscriminate capture of data at scale creates unprecedented privacy risks for the individual and for the public at large. In the age of AI, privacy has become a societal problem; regulations that primarily address individual data rights are inadequate.


Beyond static data, it is imperative that dynamic conversational prompts be treated as IP to be protected and safeguarded. If you are a consumer co-creating an artifact with a model, you want the prompts that drive this creative activity not to be used to train the model or otherwise shared with other consumers of the model.

If you are an employee working with a model to deliver business outcomes, your employer expects your prompts to remain confidential; furthermore, the prompts and the responses need a secure audit trail in case liability issues arise for either party. This is due mainly to the stochastic nature of these models and the variability in their responses over time.
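What might such an audit trail look like? Below is a minimal sketch, assuming a hash-chained, append-only log of prompt/response pairs kept by the deploying organization; the class name, field layout, and in-memory storage are hypothetical, and a real deployment would add persistence, access control, and retention policies.

```python
import hashlib
import json
import time

class PromptAuditLog:
    """Append-only, hash-chained log of prompt/response pairs.

    Each entry embeds the hash of the previous entry, so later tampering
    with any record breaks the chain.
    """

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, user_id: str, prompt: str, response: str) -> dict:
        entry = {
            "timestamp": time.time(),
            "user_id": user_id,
            "prompt": prompt,
            "response": response,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["entry_hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry has been altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

# Hypothetical usage:
# log = PromptAuditLog()
# log.record("employee-42", "Summarize the contract terms", "model response text")
# assert log.verify()
```

Because the models are stochastic, the log matters precisely when the same prompt later produces a different answer: it preserves what was actually asked and what was actually returned at the time.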

What happens now?

We are dealing with a different kind of technology, unlike anything we have ever seen before in the history of computing, a technology that exhibits emergent, latent behaviors at scale; yesterday’s approaches to security, privacy, and confidentiality no longer work.

Industry leaders are throwing caution to the wind, leaving regulators and policymakers with no alternative but to intervene.

