
The "Nothing to hide" argument is a logical fallacy

In my post on why mass surveillance is not normal, I referenced how the Wikipedia page for the Nothing to hide argument labels the argument as a "logical fallacy." On October 19th, user Gratecznik edited the Wikipedia page to remove the "logical fallacy" text. I am here to prove that the "Nothing to hide" argument is indeed a logical fallacy and go through some arguments against it.

The "Nothing to hide" argument is an intuitive but misleading argument, stating that if a person has done nothing unethical, unlawful, immoral, etc., then there is no reason to hide any of their actions or information. However, this argument has been well covered already and debunked many times (here is one example).

Setting aside the cost of never hiding anything in the first place, there are many reasons why a person may not want to share information about themselves, even if no misconduct has taken place. The "Nothing to hide" argument implicitly assumes that those you share your information with will handle it with care and not use it against you unfairly. Unfortunately, that is not how things currently work in the real world.

You don't get to make the rules on what is and is not deemed unlawful. Something you do may be ethical or moral yet unlawful, and it could cost you if you aren't able to hide those actions. For example, whistleblowers try to expose government misconduct. That is an ethical and moral goal, but it does not align with government interests. Therefore, if the whistleblower is not able to hide their actions, they will have reason to fear the government or other parties. The whistleblower has something to hide, even though it is not unethical or immoral.

You are likely not a whistleblower, so you have nothing to hide, right? As stated before, you don't get to make the rules on what is and is not deemed unlawful. Anything you say or do could be used against you. Having a certain religion or viewpoint may be legal now, but if it is outlawed one day, you will wish you had hidden it.

Just because you have nothing to hide doesn't mean it is justified to share everything. Privacy is a basic human right (at least until someone edits Wikipedia to say otherwise), so you shouldn't be forced to trust whoever asks for your information just because you have nothing to hide.

For completeness, here is a proof that the "Nothing to hide" argument is a logical fallacy by using propositional calculus:

Let p be the proposition "I have nothing to hide"

Let q be the proposition "I should not be concerned about surveillance"

You can represent the "Nothing to hide" argument as follows:

p → q

I will be providing a proof by counterexample. Suppose p is true, but q is false (i.e. "I have nothing to hide" and "I am concerned about surveillance"):

p ∧ ¬q

Someone may have nothing to hide but still be concerned about the state of surveillance. Since this scenario is consistent, the premise can be true while the conclusion is false, so the implication does not hold in general and we can conclude that the "Nothing to hide" argument is invalid (a logical fallacy).
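
For readers who want to check this mechanically, here is a minimal sketch in Lean 4 (my own formalization of the counterexample above; the choice of Lean is an assumption and not part of the original argument). It refutes the claim that p → q holds for every choice of propositions by instantiating p as true and q as false:

-- Minimal sketch: "p → q" is not valid for arbitrary propositions p and q.
-- We refute the universally quantified schema with the valuation p := True, q := False,
-- i.e. the same counterexample as above (p ∧ ¬q).
example : ¬ ∀ (p q : Prop), p → q := by
  intro h
  -- h would let us derive q from p for *any* p and q; instantiate with True and False.
  exact h True False True.intro

A single concrete valuation is enough to refute a universally quantified schema, which is exactly the structure of the proof by counterexample above.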

I know someone is going to try to rip that proof apart. If anyone is an editor on Wikipedia, please revert the edit that removed the "logical fallacy" text, as it provides a very easy and direct way for people to cite that the "Nothing to hide" argument is false.

Thanks for reading!

The 8232 Project

50 comments
  • Not that I disagree, but Wikipedia requires specific criteria for sources. I am not sure that a book calling it a logical fallacy meets those criteria any more than a book about parenting could be used to prove how to parent a child.

    Are there other Wikipedia pages that claim things to be logical fallacies that could be used to see what the burden of proof is for this claim?

    • Are there other Wikipedia pages that claim things to be logical fallacies that could be used to see what the burden of proof is for this claim?

      I'm not sure, but I found something interesting:

      One of Wikipedia's examples of an affirmative conclusion from a negative premise (a formal syllogistic fallacy) is as follows: "We don't read that trash. People who read that trash don't appreciate real literature. Therefore, we appreciate real literature."

      The "Nothing to hide" argument can be written in a similar way: "I have nothing to hide. People who have something to hide are concerned about surveillance. Therefor, I should not be concerned about surveillance."

      • I think this is still not a citable claim. You link to the affirmative conclusion from a negative premise page, which includes that statement, but that page is explaining what the fallacy is. Your other page uses a claim to argue a different topic.

        The problem is that Wikipedia is not where you prove things. You need to cite somewhere else that proves it, and you need to do it in an impartial way.

        For example, saying that '"If you have nothing to hide you shouldn't fear surveillance from the state" is a logical fallacy' and citing the book makes Wikipedia have that stance.

        But in contrast, you could say that 'Critics argue that the argument "If you have nothing to hide you shouldn't fear surveillance from the state" is a logical fallacy' and then cite the book; this way the critic is the one with the opinion and not Wikipedia.

        More citations of more critics would probably help too.

        I'm not an expert on Wikipedia by any means, but I do see why someone may have considered this statement not belonging on Wikipedia.

        Wikipedia has some info here: https://en.m.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view

        Also see the links at the top of that page about "Verifiability" and "No original research", as these, together with the neutral point of view, are the three key policies needed to allow the statement.

  • Here's a little experiment: next time you hear someone defend surveillance because they have nothing to hide, ask them if you can have a look through their chat and browser history. Most likely they'll reply "that's private," and maybe after some time they will understand.

    That's of course far from the only argument for privacy, but it has a certain effect on most people.

    • The issue that arises from this approach, as I've found, is that people have something to hide from you, but not from the government or large corporations. When they feel as if they are just one of many in a pool, they feel less important, and therefore less exposed, than when they are singled out by you.

      You could instead try something similar: "Why does the FBI need to know what color underwear you wear?" and so on, to help them see that surveillance goes much deeper than they realize, and that not everything is relevant information.

      • Yep. I use that argument sometimes, but it really depends on the person whether the "then give me your email/chat history/etc." argument will work.

        And just like you said, people don't want you reading it. They wouldn't want to see you looking through their phone. But in the context of technology, it's very abstract. Like, when Instagram's chats weren't encrypted, telling someone "Instagram can read them" may sound vague. They don't imagine how Meta employees might have access to them, or the software behind it that could analyze their chats. That could all be happening, but "out of sight, out of mind!" really helps people tolerate those possibilities. The frontend, the chat interface, looks OK, so yeah.

        I don't think I could explain it very well, so my bad (not a native speaker), but yeah, I feel like because we are not encouraged to think about how the software works behind the curtain, it's easy to assume "well, I'm not a target, so why should I worry about doing this [privacy thing]?"

  • I usually respond to the phrase "I've nothing to hide" with a request: give me your checking account number and your medical history, let me read your email and chats, and tell me when you are going to be on vacation and the home will be empty... These are exactly the things that these sites collect and sell. We wouldn't allow it in real life, let alone online.

    Surveillance advertising is a crime, period.

  • The "Nothing to hide" argument isn't really an argument, it's more of a conclusion. That conclusion is then taken to support mass surveillance. It's also not a logical fallacy (even if it's wrong). It may be "proven" using logical fallacies, but that doesn't make it a logical fallacy on its own. So I think it's correct to remove the logical fallacy text.

    I think the more effective defense against this one is to provide counterexamples for why you might care about mass surveillance:

    • People do have something to hide. E.g. browser history, religious/political beliefs, etc...
    • You may not have something to hide now, but in the future you may wish it was still hidden. You can't unpublish information these days.
    • People you care about may have something to hide, and not caring about mass surveillance puts them at risk.
    • Relatively harmless individual datapoints can be combined to create harmful datasets that allow for mass exploitation.
    • Governments may abuse mass surveillance; even if you are not targeted directly, you may experience negative effects when journalists or political dissidents are silenced.
    • Etc...
  • I would add to the conversation with these questions:

    Should all information be known? Just because something doesn't need to be hidden doesn't imply that it should be known broadly. It's not okay for somebody to know what color underwear I'm wearing right now.

    Is all information equal in value? Accepting that one kind of data point is okay to be public does not mean that all data points are okay to be public. My address is public record (unfortunately), but that doesn't mean my social security number, ID number, and passport number should be public as well.

    • I would add to the conversation with these questions:

      I love fostering discussions, and I'm glad to see you do too!

      Should all information be known?

      Obviously nothing good can come if we all turn into the Borg. However, the question becomes more interesting when you consider different people. Should all of your information be known? No, obviously not. Should all of the CIA's information (besides the personal information of others) be known?

      Is all information equal in value?

      My previous counter question assumes that no, not all information is of equal value. However, even if the value of information differs, your ability to control if that information is shared should not be diminished.

      I had a boss who would remember everything you told them and would make incomplete assumptions about it. If you told them you liked dogs, they would assume dogs are your favorite animal. If you told them you got scratched by a cat, they would assume you absolutely hate cats. In this context, "mundane" information such as my personal preferences (favorite animal, etc.) is not something I would want to share with my boss, since nothing good comes from it. Even though the information has "less value," its value was raised depending on who I told it to.

      Your social security number is high value to, say, your neighbor, but not necessarily to the DMV. CIA documents may be high value against other countries, but it might be worth making them available to the nation's citizens. So, information itself does not have a set value above any other piece of information, but it does have differing value depending on who you share it with.

  • There's something like a joke about this:

    If you're working on something brilliant, no matter who's watching you, they won't understand anything anyway; and if you're doing ordinary things, it's really not that important that someone is watching.

    But of course, intrusive, comprehensive surveillance, especially from commercial companies, certainly should not be justified.
