Privacy In A World Of Growing Technology

While the Internet and technological advancement keep growing and gathering momentum toward seemingly limitless possibilities, we should be aware of the impact such technology has on our daily lives. Can we actually live without it? Do we understand the real challenges of being connected and of having our lives digitally imprinted and available for years, even centuries, to come?

My question still stands: do we value our privacy and the protection of our personal sphere of life? People often say that they value having some control over who knows what about them. In essence, we certainly do not want our personal information to be accessible to just anyone at any time.

However, we need to admit that recent advances in information technology threaten privacy: they have reduced our control over personal data and opened up the possibility of a range of negative consequences resulting from access to that data.

Big Data and advanced technology allow for the storage and processing of exabytes of data. The revelations of Edward Snowden demonstrated that these worries are real: the technical capabilities to collect, store and search large quantities of data concerning telephone conversations, Internet searches and electronic payments are now in place and are routinely used by government agencies.

For businesses, personal data about customers and potential customers are now also a key asset, giving them leverage to grow and to create new business prospects. At the same time, the meaning and value of privacy remain the subject of considerable controversy.

The combination of the increasing power of new technology and the declining clarity of, and agreement on, privacy gives rise to problems concerning law, policy and ethics.

Let us agree that the challenge is as real and as pressing as it appears, without any further rethinking. We are hooked on technology and cannot easily let go of how attached to it we are today. In the old days our wallet was perhaps the first thing we picked up; today, it seems, you could live without it, since the cell phone is what gets picked up first. Can anyone really live without cell phones and all the technology that surrounds us, all the way to the office and back home?

Privacy is intertwined with the use of technology. The privacy debate has evolved alongside the rapid and seemingly uncontrollable development of technology. It is difficult to conceive of the notions of privacy and discussions about data protection as separate from the way computers, the Internet, mobile computing and the many applications of these basic technologies have evolved.


The interest of individuals in exercising control over access to information about themselves is most often referred to as “informational privacy”. For instance, think about information disclosed on Facebook or other social media. All too easily, such information might be beyond the control of the individual.


In reality, we need to understand that accounts of privacy can be either descriptive or normative, depending on whether they are used to describe the way people define situations and conditions of privacy and the way they value them, or to indicate that there ought to be constraints on the use of information or on information processing. Informational privacy in the normative sense typically refers to a non-absolute moral right of persons to have direct or indirect control over access to information about themselves, over situations in which others could acquire information about them, and over technology that can be used to generate, process or disseminate such information.


Privacy debates almost always revolve around new technology, ranging from genetics and the extensive study of biomarkers, brain imaging, drones, wearable sensors and sensor networks, social media, smartphones and closed-circuit television to government cybersecurity programs, direct marketing, RFID tags, Big Data, head-mounted displays and search engines. There are basically two reactions to the flood of new technology and its impact on personal information and privacy. The first, held by many people in the IT industry and in R&D, is that we have zero privacy in the digital age and that there is no way to protect it, so we should get used to the new world and get over it. The other is that our privacy is more important than ever and that we can, and must, attempt to protect it.

Personal information or data is information or data that is linked, or can be linked, to individual persons. Examples include date of birth, sexual preference, whereabouts and religion, but also the IP address of your computer or metadata pertaining to these kinds of information. Personal data can be contrasted with data that is considered sensitive, valuable or important for other reasons, such as secret recipes, financial data or military intelligence. Data used to secure other information, such as passwords, is not considered here: although such security measures may contribute to privacy, their protection is only instrumental to the protection of other information, and the quality of those measures is therefore out of scope for this discussion.

If the legal definition of personal data were interpreted referentially, much of the data about persons would be unprotected; that is, the processing of this data would not be constrained on moral grounds related to privacy or the personal sphere of life.

Although technology is typically seen as the cause of privacy problems, there are also several ways in which it can help to solve them. There are rules, guidelines and best practices that can be used for designing privacy-preserving systems, ranging from ethically informed design methodologies to the use of encryption to protect personal information from unauthorized access.


Rules and principles give high-level guidance for designing privacy-preserving systems, but following such methodologies does not automatically make the resulting IT system privacy-friendly or secure by design. Some design principles are rather vague and abstract: what does it mean to make a transparent design, or to design for proportionality? The principles need to be interpreted and placed in context when designing a specific system. But different people interpret the principles differently, which leads to different design choices, some of which will be clearly better than others.


There is also a difference between the design and the implementation of a computer system. During the implementation phase software bugs are introduced, some of which can be exploited to break the system and extract private information. In addition, implementation is another phase in which choices and interpretations are made: system designs can be implemented in infinitely many ways. Moreover, it is very hard to verify, for anything beyond trivial systems, whether an implementation meets its design or specification. This is even more difficult for non-functional requirements such as ‘being privacy preserving’ or security properties in general.


In Tor, messages are encrypted and routed via numerous different computers, thereby obscuring the original sender of the message (and thus providing anonymity). Similarly, in Freenet content is stored in encrypted form, distributed across all users of the system. Since users themselves do not have the necessary decryption keys, they do not know what kind of content the system stores on their own computers. This provides plausible deniability and privacy. The system can at any time retrieve the encrypted content and send it on to other Freenet users.
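
To make the layering idea concrete, here is a minimal, purely illustrative Python sketch of onion-style encryption, in the spirit of (but far simpler than) what Tor actually does. The relays, keys and the use of the third-party cryptography package are assumptions for illustration only; real Tor builds circuits, negotiates keys with each relay and uses different primitives.

    # Toy illustration of layered ("onion") encryption, loosely in the spirit of Tor.
    # Hypothetical sketch only: it assumes the third-party 'cryptography' package
    # (pip install cryptography); real Tor uses circuits, key negotiation and TLS.
    from cryptography.fernet import Fernet

    relay_keys = [Fernet.generate_key() for _ in range(3)]  # three pretend relays

    def wrap(message: bytes, keys) -> bytes:
        # Encrypt in layers so each relay can remove exactly one layer.
        for key in reversed(keys):
            message = Fernet(key).encrypt(message)
        return message

    def peel(blob: bytes, key: bytes) -> bytes:
        # A relay strips its own layer but cannot read what remains inside.
        return Fernet(key).decrypt(blob)

    onion = wrap(b"hello from an anonymous sender", relay_keys)
    for key in relay_keys:
        onion = peel(onion, key)
    print(onion)  # b'hello from an anonymous sender'

Each relay in this sketch sees only the layer it can decrypt, which is why no single intermediate computer learns both the sender and the message content.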

Privacy-enhancing technologies also have their downsides. For example, Tor, the tool that allows anonymized communication and browsing over the Internet, is susceptible to attacks whereby, under certain circumstances, the anonymity of the user is no longer guaranteed. Freenet (and other tools) have similar problems. Note that for such attacks to work, an attacker needs access to large resources that, in practice, only the intelligence agencies of nation states can realistically muster.

However, there are other risks. Configuring such software tools correctly is difficult for the average user, and when the tools are not correctly configured the user's anonymity is no longer guaranteed. And there is always the risk that the computer on which the privacy-preserving software runs is infected by a Trojan horse (or other digital pest) that monitors all communication and knows the identity of the user.


Modern cryptographic techniques are essential in any IT system that needs to store (and thus protect) personal data. However, cryptography by itself does not provide any protection against data breaches; only when applied correctly in a specific context does it become a ‘fence’ around personal data.
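
As a hedged illustration of that point, the sketch below (assuming Python and the third-party cryptography package) encrypts a small personal record. The record, field names and key handling are hypothetical; in practice the protection stands or falls with key management, access control and the context in which the code runs.

    # Minimal sketch: encryption as a 'fence' around a personal record.
    # Hypothetical example assuming the third-party 'cryptography' package;
    # the key itself must be stored and managed securely for this to mean anything.
    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # protecting this key is the real problem
    box = Fernet(key)

    record = {"name": "A. Person", "date_of_birth": "1990-01-01"}
    ciphertext = box.encrypt(json.dumps(record).encode())

    # Only a holder of the key can recover the personal data.
    restored = json.loads(box.decrypt(ciphertext).decode())
    assert restored == record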


The use and management of users’ online identifiers are crucial to the current Internet and to social networks. Online reputations are becoming more and more important, both for users and for companies. In the era of ‘Big Data’, correct information about users has an increasing monetary value.

‘Single sign-on’ frameworks, provided by independent third parties (such as OpenID) but also by large companies such as Facebook, Microsoft and Google, make it easy for users to connect to numerous online services using a single online identity. These online identities are usually directly linked to the real-world (offline) identities of individuals; indeed, Facebook, Google and others require this form of log-on. Requiring a direct link between online and ‘real world’ identities is problematic from a privacy perspective, because it allows the profiling of users. Not all users realize how much data companies gather in this manner, or how easy it is to build a detailed profile of them. Profiling becomes even easier if the profile information is combined with other techniques such as implicit authentication via cookies and tracking cookies.

From a privacy perspective, a better solution would be attribute-based authentication, which grants access to online services based on attributes of the user, for example their friends, nationality or age. Depending on the attributes used, access might still be traced back to specific individuals. On the other hand, users can no longer be tracked across different services, because they can present different attributes to different services, which makes it difficult to trace an online identity over multiple transactions and thus provides a degree of privacy that does not link activity back to the user.
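
A toy sketch of the idea follows, with hypothetical names chosen for illustration: the service checks a verified attribute (say, “over 18”) issued by a trusted party and never sees a name or account identifier. Real attribute-based credential systems use cryptographic credentials and zero-knowledge-style proofs rather than plain Python objects.

    # Toy illustration of attribute-based access (hypothetical names): the service
    # checks a verified attribute instead of an identity.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AttributeCredential:
        attribute: str   # e.g. "over_18", issued after checking a passport
        valid: bool

    def grant_access(cred: AttributeCredential, required: str) -> bool:
        # The service learns only that the attribute holds, not who the user is.
        return cred.valid and cred.attribute == required

    age_proof = AttributeCredential(attribute="over_18", valid=True)
    print(grant_access(age_proof, "over_18"))  # True, without revealing a name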

Technology thus influences privacy not only by changing the accessibility of information, but also by changing the privacy norms themselves. For example, social networking sites invite users to share more information than they otherwise might, and this “oversharing” becomes accepted practice within certain groups. Similar influences can be expected from future and emerging technologies, and they therefore ought to be considered when trying to mitigate such effects.

The other fundamental question is whether, given the future (and even current) level of informational connectivity, it is feasible to protect privacy by trying to hide information from parties who may use it in undesirable ways. It may be more feasible to protect privacy through transparency: by requiring actors to justify decisions made about individuals, and thus insisting that decisions are not based on illegitimate information. This approach comes with its own problems, as it might be hard to prove that the wrong information was used for a decision. Still, it may well happen that citizens, in turn, start collecting data on those who collect data about them, e.g., governments. Such “counter-surveillance” or sousveillance may be used to gather information about the use of information, thereby improving accountability. The open source movement may also contribute to the transparency of data processing.

Apart from general debates about the desirable and undesirable features of the precautionary principle, the challenges to it lie in its translation to social effects and social sustainability, as well as in its application to consequences induced by the intentional actions of agents. Whereas the occurrence of natural threats or accidents is probabilistic in nature, those who are interested in the improper use of information behave strategically, requiring a different approach to risk (i.e., security as opposed to safety). In addition, proponents of precaution will need to balance it with other important principles, namely informed consent and autonomy.

Finally, it is important to note that not all social effects of technology concern privacy; they also include, for example, the effects of social networking sites on friendship and the verifiability of the results of electronic elections. Value-sensitive design approaches and impact assessments of information technology should therefore not focus on privacy only, since technology affects many other values as well.

So, are we still human beings, or have we really become robotic in our ways? We have already started down that road. Sharing your live location helps friends find you, but it also opens up the potential to be tracked. So the real question is: do you know what your digital footprint looks like? What is out there that you have not noticed you give away? In reality, privacy starts with yourself; do not expect others to protect it for you. If you are the emperor who walks naked while believing he wears marvelous and most desirable clothes, a child may just tell you that you are naked. Understand, and make others aware, that what you post and what pictures you put up are all important identifiers.

N.B.: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official position of the African Academic Network on Internet Policy.
