Over the past year, a string of court rulings around the world has pointed to a dangerous trend: courts attacking technology providers rather than the people who use their technologies.
It’s hard to say where it started, so I’ll just begin with the EU court ruling that forced Google to remove links to certain stories under the controversial ‘right to be forgotten’. That put the onus on Google to field the requests, decide each one, and then remove the links or not. The courts didn’t go after the sites that published the information; they went after Google.
Today Apple was forced to remove an app called Secret (a supposedly anonymous social messaging app) from its Brazilian App Store. It’s not entirely clear whether Brazilian law forbids the anonymous expression of ideas or whether the app was pulled because people were making slanderous statements about others and the process of complaining about those statements (and getting them removed) had to be done in English.
It doesn’t really matter which was true. The point is that the courts held Apple accountable, not the people who were using the app. You could make the same argument about lots of technologies. You could write slanderous things about someone on a piece of paper and fax it to someone else, but the courts don’t go after fax machine manufacturers. If a newspaper prints a slanderous article about someone, the courts don’t go after the printing press manufacturers.
Just about any communications technology can be used anonymously, but the fact that some people might use it to break the law doesn’t mean you go after the technology that lets them do it.
The big picture here is that unless companies like Google and Apple start pushing back against these rulings (and I mean pushing back hard), we will reach a state where the maker of any application or technology can be held legally accountable for the ways people use it.
And that could very well lead to a situation where developers have to build mountains of contingency code into their applications. Code to disable certain features or even the entire app. Code that can or can’t be activated by the police or a government. Code to spy on users or not spy on users. Code that saves everything or code that forgets everything.
And all this code would have to be deployable on a country-by-country basis, just in case a court in Pakistan decides you can’t send selfies to someone in India, or a court in Turkey forbids any message containing the words ‘protest’ or ‘strike’ or ‘corruption’ (or rules that any app detecting those words in a message has to send the sender’s personal information and GPS location back to the Turkish government).
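To make the concern concrete, here is a minimal sketch of what that per-country contingency code might look like. Everything in it is hypothetical: the country codes, feature names, and keyword lists are illustrative stand-ins, not real legal requirements.

```python
# A hypothetical sketch of per-country contingency code. The country codes,
# feature names, and keyword lists below are illustrative, not real rulings.
from dataclasses import dataclass, field


@dataclass
class CountryPolicy:
    disabled_features: set = field(default_factory=set)  # features a court ordered off
    blocked_keywords: set = field(default_factory=set)   # words a court ordered filtered
    app_disabled: bool = False                           # the whole-app kill switch


# Policies keyed by country code; in practice this table would be fetched
# from a server so it could change after deployment, ruling by ruling.
POLICIES = {
    "BR": CountryPolicy(disabled_features={"anonymous_posting"}),
    "TR": CountryPolicy(blocked_keywords={"protest", "strike", "corruption"}),
}


def feature_enabled(country: str, feature: str) -> bool:
    """Return True if a feature may run in the given country."""
    policy = POLICIES.get(country)
    if policy is None:
        return True                       # no ruling applies here
    if policy.app_disabled:
        return False                      # entire app switched off by decree
    return feature not in policy.disabled_features


def message_allowed(country: str, text: str) -> bool:
    """Return True if a message contains no court-blocked keywords."""
    policy = POLICIES.get(country)
    if policy is None:
        return True
    return not (set(text.lower().split()) & policy.blocked_keywords)
```

Even this toy version hints at the maintenance burden: every new ruling means another entry in the table, another code path to test, and another way the app behaves differently depending on where its user happens to be standing.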
Perhaps every developer should start thinking about all the possible changes they may or may not have to make to their applications after deployment and start building that contingency code in from the very beginning.