EU leaders clash with Google over the meaning of 'personal data'

The June 2007 opinion from the Article 29 Working Group goes on to list several technical reasons why an IP address may fail to identify its user, citing a borrowed computer in an Internet cafe as an example. But since the ISP serving that computer probably doesn't know offhand that it sits in an Internet cafe, a court requisitioning data on the computer's user would likely proceed as though the address identified that person anyway. Therefore, the group concluded, "unless the Internet Service Provider is in a position to distinguish with absolute certainty that the data correspond to users that cannot be identified, it will have to treat all IP information as personal data, to be on the safe side."
In other words, an IP address doesn't carry enough information on its face to say whether it can or cannot be used to identify a user; and since courts will likely treat IP addresses as though they could, data handlers should treat them that way as well.
It's worth noting here that the EU's official account of the meeting mis-defined the concept of the IP address, calling it "a 32-bit numeric address that serves as an identifier for each computer," a description that treats the address as a fixed per-machine identifier, perhaps confusing it with a MAC address, while also neglecting to acknowledge the existence of 128-bit IPv6 addresses.
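For reference, an IPv4 address is indeed a 32-bit number, while IPv6 addresses are 128 bits wide. The short sketch below, a minimal illustration using Python's standard ipaddress module and addresses drawn from the reserved documentation ranges, shows the difference.

```python
import ipaddress

# An IPv4 address is a 32-bit number; 203.0.113.42 is a documentation-range example.
v4 = ipaddress.ip_address("203.0.113.42")
print(v4.version, v4.max_prefixlen)   # 4 32
print(int(v4))                        # the same address expressed as one 32-bit integer

# An IPv6 address is 128 bits wide; 2001:db8::1 is likewise a documentation example.
v6 = ipaddress.ip_address("2001:db8::1")
print(v6.version, v6.max_prefixlen)   # 6 128
```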
According to the AP's account of yesterday's meeting, Google's Fleischer responded to these arguments by stating that Google uses IP addresses to discern what country a user is operating in, and to tailor its search results to that country of origin. "If someone taps in 'football' you get different results in London than in New York," Fleischer said.
He added that Google later uses those addresses for traffic-pattern research, in a way that, he argued, does not infringe on individuals' privacy.
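Google has not published how its country lookup actually works; the sketch below is only a rough illustration of the general technique Fleischer described, matching an address against a table of allocated ranges. The ranges and country codes here are invented purely for demonstration.

```python
import ipaddress

# Hypothetical, abbreviated table mapping address ranges to country codes.
# Real geolocation services build such tables from regional registry allocations.
RANGES = [
    (ipaddress.ip_network("81.0.0.0/8"), "GB"),
    (ipaddress.ip_network("203.0.113.0/24"), "US"),
]

def country_for(ip: str) -> str:
    """Return the country code whose range contains ip, or 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for network, country in RANGES:
        if addr in network:
            return country
    return "unknown"

# A query arriving from an address in the first range would get UK-flavored results.
print(country_for("81.2.69.142"))   # GB
```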
"Non-aggregated data" is the final category on the Art. 29 list, and questions could arise as to whether Google's research qualifies as requiring legal protection, should the group's recommendations be adopted. The problem, Art. 29 wrote, is whether each datum used in the research sampling can somehow -- never mind how difficult it might be to do so -- be traced back to an individual source. "If the codes used are unique for each specific person," Art. 29 wrote, "the risk of identification occurs whenever it is possible to get access to the key used for the encryption. Therefore the risks of an external hack, the likelihood that someone within the sender's organization -- despite his professional secrecy -- would provide the key and the feasibility of indirect identification are factors to be taken into account to determine whether the persons can be identified taking into account all the means likely reasonably to be used by the controller or any other person, and therefore whether information should be considered as 'personal data.' If they are, the data protection rules will apply."
So with a broadening of the EU's data protection rules becoming a genuine possibility, and with opposition apparently limited to American corporations and largely indifferent US trade representatives, the question for Google and others becomes murkier still: Will it become illegal for any company that happens to be successful at its business to keep personally identifiable data in an online-accessible location for longer than a year?