Jan 30, 2025

Replacing Our AI Privacy Expectations: Teenage Angst and the FIPPs


Nick Reese
As a kid who grew up in the 90s, I’m not ashamed to say I’m a Dave Matthews Band fan. Not that this has ever been especially useful, but there are not many song lyrics on albums ranging from Recently to Busted Stuff (to include the Lillywhite Sessions) that I don’t know. The undeniable 90s make-out song, Satellite, includes the chorus line “Everything good needs replacing,” which turns out to be informative when thinking about privacy in AI. And about the FIPPs.

An undisclosed number of years later, with more than a little emerging technology policy, teaching, and building under my belt, one of the things I have noticed is the propensity for emerging technology to invalidate things like risk management frameworks, response processes, and even policies and laws. That’s really the core of the problem, isn’t it? We have these great documents, processes, and frameworks that help us understand the risks we face, but when those artifacts become outdated, we end up in a scramble. It’s not that new technologies are inherently bad, but they cause us to rethink what we were once comfortable with. It’s not that we were wrong, it’s that…everything good needs replacing. Thanks, Dave.

In AI, one of the major angst-inducing points is data and privacy. We live in a moment when the major AI companies we are all most familiar with are synonymous with collecting the entirety of digitized human knowledge, with no regard for copyright, intellectual property, or privacy, and pumping it all into their multi-trillion-parameter AI models. This technology makes us uncomfortable with how our data is aggregated online and is, correctly, making many rethink data privacy. It naturally follows that we examine the documents, policies, laws, and frameworks around data privacy. In the US, we live in the absence of large, monolithic privacy legislation like the EU’s GDPR. But the US has a significant history in data privacy dating back to the 1970s.
Many may not realize it, but the Fair Information Practice Principles (FIPPs) are a generally accepted guideline that many use to formulate the basis of privacy policies and legislation. In the age of AI, do the FIPPs still apply? Can they survive in their current form, or do they also need replacing?

FIPPs

The Federal Privacy Council (FPC) describes the FIPPs this way:

Rooted in a 1973 Federal Government report from the Department of Health, Education, and Welfare Advisory Committee, “Records, Computers and the Rights of Citizens,” the Fair Information Practice principles (FIPPS) have informed Federal statute and the laws of many U.S. states and foreign nations and have been incorporated in the policies of many organizations around the world. They are critical to how the government approaches information management, especially information about people. While the precise expression of the FIPPs has varied over time and in different contexts, they have always retained a consistent set of core principles that are broadly relevant to agencies’ information management practices.

The FPC goes on to say that the FIPPs are not requirements but principles that agencies use when making privacy policies. They have, in other words, evolved over time. Here are the principles themselves:

- Access and Amendment
- Accountability
- Authority
- Minimization
- Quality and Integrity
- Individual Participation
- Purpose Specification and Use Limitation
- Security
- Transparency

Looking at privacy policies and laws since 1973, you can see reflections of the FIPPs in those documents. These are principles with which few would argue. For example, Minimization states that “Agencies should only create, collect, use, process, store, maintain, disseminate, or disclose PII that is directly relevant and necessary to accomplish a legally authorized purpose, and should only maintain PII for as long as necessary to accomplish the purpose.” Don’t collect more than you need. Don’t keep it for longer than you need it.
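As an illustration only (not drawn from the FPC’s guidance), the Minimization principle boils down to two checks you can actually enforce in a data pipeline: collect only the fields the authorized purpose requires, and discard records once that purpose has been served. The field names and 90-day retention window in this sketch are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the Minimization principle in code.
# ALLOWED_FIELDS and RETENTION are stand-ins an organization
# would derive from its own legally authorized purpose.
ALLOWED_FIELDS = {"user_id", "email"}   # directly relevant and necessary
RETENTION = timedelta(days=90)          # no longer than necessary

def minimize(record: dict) -> dict:
    """Drop any field not needed for the authorized purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def expired(collected_at: datetime, now: datetime) -> bool:
    """True once a record has outlived its retention window."""
    return now - collected_at > RETENTION

# Fields outside the stated purpose are never stored at all.
record = {"user_id": "u1", "email": "a@b.c", "ssn": "123-45-6789"}
stored = minimize(record)
print(stored)
```

The point of the sketch is that minimization happens at ingestion, not as a later cleanup pass: data you never store is data you never have to protect, disclose, or delete.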
Good advice.

FIPPs in the Age of AI

Reading the FIPPs can make those especially paranoid about their privacy feel a little better. Having the ability to access and amend your data is comforting. A use limitation and minimization overseen by security and transparency feel good too. But how much of this occurs today in a data-heavy AI world?

Taking these principles from paper to practice tells a different story. The FIPPs are not binding and are specifically targeted toward the federal government, so there should be no expectation that a private entity, which is in no way bound by the FIPPs, will abide by them. Even understanding that, a stark picture emerges if you look at the FIPPs through the lens of the big-name AI companies and what their information practices look like.

The federal government is an end user of AI, not an AI developer, with very few exceptions. How it uses its own AI and the data it feeds into it will almost certainly be subject to the FIPPs. But since the government is not the developer, AI is not being developed in a way that is consistent with the FIPPs, because none of the major developers has any obligation to do so. The end product is a system that is not designed for privacy and must be adjusted to fit inside privacy regimes. There is a big difference between being designed for privacy and being adjusted to meet minimum standards.

What Needs Replacing?

While the problem in emerging technology often revolves around updating outdated documents and frameworks, the data privacy issue is not the same. The FIPPs have evolved over 50 years of privacy practice into a useful and implementable framework for creating privacy policies that make sense for a specific use case, yet most people have not heard of them. What needs replacing is our expectations of privacy and what we demand from the marketplace. We should not adjust our view of privacy or assume we have to trade privacy for a service the way we have with social media for decades.
Instead, we should stand on our commitment to data privacy and see such a future as possible. We should get reacquainted with the FIPPs and start to understand how each principle applies to the various AI applications we might use. Privacy is not to be compromised, nor is it a reason to simply swear off AI. Many organizations deal with sensitive or regulated data, and it is not possible for them to use an AI system that is not privacy preserving; this creates an artificial barrier to entry. Many of those organizations perform functions and services that are critical to our lives, and they deserve the best technology.

Instead of replacing 50 years of privacy wisdom, we need to replace the AI that indiscriminately vacuums up our data in the absence of sweeping privacy legislation. The absence of “no” is not a “yes.” Privacy-preserving AI is not an abstract term but a technical reality. Privacy-preserving AI must be designed with the FIPPs in mind. You should be able to read each of the FIPPs and see exactly how your AI system conforms to and enhances those principles. That is likely not true for most organizations, but it should be.

Leaders considering an AI implementation should determine whether they are facing an artificial barrier to entry that could be overcome by asking the right questions. Leaders across government and the private sector should ask themselves the following:

- How important is data privacy to my customers and mission?
- What regulatory or compliance regimes am I subject to?
- Can I clearly see how my AI deployment enhances the privacy, regulatory, and/or compliance requirements I have?
- Are there other AI products on the market specifically built for privacy, regulatory, and/or compliance environments?
- Is an on-premises AI solution a better option than a multiparty cloud?

So, as you search for old Dave Matthews Band songs you forgot about on your Spotify, consider the data in your organization.
Consider how you and those you serve expect your data to be handled. Do the FIPPs need replacing, or do our expectations?

Connect with us: LinkedIn, Bluesky, X, Website

To learn more about the services we offer, please visit our product page.

Nick Reese is the cofounder and COO of Frontier Foundry and an adjunct professor of emerging technology at NYU. He is a veteran and a former US government policymaker on cyber and technology issues. Visit his LinkedIn here.

This post was edited by Thomas Morin, Marketing Analyst at Frontier Foundry. View his Substack here and his LinkedIn here.