December 2005 Archives

Looking Back


I can't say that 2005 was the best year ever, but going back through the archives, at least I wrote a few marginally interesting weblog posts.

The big story for the first two-thirds of the year was undoubtedly Grokster:


Brand X, the other major internet-related case to reach the Supreme Court this year, received a decent amount of coverage here, too:

The big story of the last third of the year was the legality of the Google Book Search program. Aside from Grokster, this was the story that generated the most links to commentary:

One of the two subjects that I hope to put more thought into over the next year is information literacy and finding ways to evaluate and manage the usefulness and trustworthiness of internet resources (both legal and non-legal):

The other subject that generated some interesting posts this year, and that I'm sure I will think more about next year, is the future of entertainment:

Indecency was another subject that was well-covered this year:

I am particularly pleased with some posts about copyright legislation, in particular:

The most detailed post I don't remember writing was The broadest of the bands (Aug. 26).

Can a blogger avoid blogging about blogging and RSS? I think not:

Breaking out of the usual format, I tried a few different ways of posting. In November, I hosted Blawg Review #31. I liveblogged the Future of Music conference in September. I also experimented with podcasting and videoblogging:


Unfortunately, audio and video are more time-consuming to do well than text, but I hope to podcast and videoblog more frequently next year. I also hope to write less badly next year.

My favorite post titles for the year include:

Some other looks back on 2005 of interest:
Evan Brown, Internetcases.com: Ten intriguing Internet cases from 2005: "It's not a compilation of cases that are necessarily important to the overall development of this area of law (for example MGM v. Grokster is not on the list), but is merely a list of cases that have either off-the-wall facts or surprising/provocative outcomes."

David Pogue, The New York Times, 10 Greatest Gadget Ideas of the Year

JD Lasica, New Media Musings: Top 10 Tech Transformations of 2005: "1. The edges gain power. From the video and music worlds to politics and culture, power is increasingly flowing away from the media, from the political elites and from the corporate suits and into the hands of ordinary users who are collectively wielding more influence in all walks of life, mostly thanks to the Internet. The forces of freedom are steadily chipping away at the power of the forces of control. It's pure beauty."

And my favorite new blog of 2005? Undoubtedly Solove and company at Concurring Opinions.

Wikipedia and Authority


I initially posted this as a comment, following up on a comment by "Y456two" on Wikipedia Woes, but here it is as its own entry because, well, it is rather long.

This portrays a fundamental misunderstanding of what Wikipedia is. Adding editors amounts to turning Wikipedia into the Encyclopedia Brittanica. Why would you want to do that? Don't we already have an Encyclopedia Brittanica?

No-- it represents the fundamental gap that separates what Wikipedia is from what it seeks to become. A user-driven Wikipedia edited by panels of subject experts in various fields will be both more comprehensive AND more authoritative than a traditional encyclopedia.

If you believe that the masses are not smart enough to make their own judgements about the veracity of what they read, then, yes, absolutely, we should have a heavily regulated Internet, publishing industry, and media (sounds like China, don't it?)

I do think that heavy internet users and information professionals over-estimate the information literacy of the average internet user, but private editorial control on a private web site is a long way from state regulation. Why do we trust articles in the NY Times more than the Washington Times or the West Podunk Pioneer Press? A reputation for accuracy and veracity. Why would one prefer to buy from a seller on eBay with a +300 feedback rating rather than from one with no feedback rating? A reputation for being an honest dealer.

What do we know about the authors of a Wikipedia entry? Why is it authoritative? We only know that Wikipedia as a whole is generally accurate. But because each article is written by a different group of authors, researchers do not have an easy way of figuring out which articles are accurate and which contain blatant falsehoods or smaller inaccuracies.

Adding a series of editorial boards composed of acknowledged experts in various fields to monitor Wikipedia entries would go a long way towards increasing the accuracy and trustworthiness of Wikipedia as a whole. And it is possible to do this without becoming a Britannica clone-- in fact, doing so would take advantage of the same internet and collaborative technologies and processes that make Wikipedia possible. It just happens to also acknowledge the fact (and, yes, it is a fact) that some people simply have more knowledge and experience in particular subjects than others. In the Wikipedia model, these boards would not simply be appointed from the get-go, but could have flexible memberships, with new members joining either through distinguished work in academia or business or through distinguished contributions to Wikipedia itself.

At the very least, Wikipedia could post a list of the contributors who wrote or edited each article. This would make it possible for researchers to find out more about the authors of each individual article and make an educated decision about whether to trust its accuracy.
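
The raw material for such a list already sits in each article's revision history. As a rough sketch of how easily it could be surfaced (this assumes a MediaWiki-style query API and the third-party requests library; the endpoint and parameters are illustrative, not a feature Wikipedia currently puts in front of readers), a researcher could pull the recent contributors to any article:

```python
# Hypothetical sketch: list the recent contributors to a Wikipedia article
# via a MediaWiki-style query API. The endpoint and parameters are
# illustrative assumptions, not a description of a feature Wikipedia offers.
import requests

API = "https://en.wikipedia.org/w/api.php"

def contributors(title, limit=50):
    """Return (user, timestamp) pairs for the most recent revisions of a page."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "user|timestamp",
        "rvlimit": limit,
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    return [(rev.get("user", "anonymous"), rev["timestamp"])
            for rev in page.get("revisions", [])]

if __name__ == "__main__":
    for user, when in contributors("Copyright"):
        print(when, user)
```

The point is not the scraping itself, but putting that authorship information next to the article, where a reader deciding whether to trust it will actually see it.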

I could say that the 'blogosphere' needs editors. I could claim that the problem with blogs is that there isn't some credentialed editor who controls what is posted.

Unlike Wikipedia, the "blogosphere" is not a single entity. Individual blogs have attributes that establish their reputation for accuracy and veracity. For example, you can read my biographical information and see that my posts carry less intellectual heft than those of Prof. Goldman. Unlike the millions of individual blogs posted by named or pseudonymous authors, Wikipedia presents itself as a centralized authority and strips away many of the signs that make it possible for an individual researcher to decide whether a single article is reliable. We can't look to the author's biography. We can't judge the publisher's credibility, because this publisher will post anything. We can't look at the professionalism of the page design. The Wikipedia brand draws credibility from the articles that justifiably earn it and lends that credibility to articles that are not worthy of it.

The problem with Wikipedia is that it lends its brand to anyone. In the trademark context, a trademark owner who nakedly licenses a mark to anyone without keeping track of the quality of goods sold under that mark may lose the right to defend the mark. Since a trademark is meant to protect consumers and indicate the source of a good or service, naked licensing strips away the mark's value. By allowing anyone and everyone to edit entries, Wikipedia may squander any credibility it has attained.

As for Eric Goldman, I suppose he would be surprised to know that Usenet continues to thrive and be useful to millions of users every day.

I would challenge the idea that Usenet continues to thrive. I have yet to even load a Usenet newsreader on my PowerBook, which means that I haven't delved into that thriving medium in at least nine months and haven't missed it a bit. People may still use newsgroups, but they have long since ceased to be relevant. How many average internet users can recognize that "alt.nerd.obsessive" denotes a newsgroup?

Here is the heart of the issue: do we trust people?

We trust people to the extent that they have as full information as possible on which to base their decisions. As another analogy, this is the driving principle behind securities law-- we have a policy bias towards requiring publicly traded corporations to disclose information-- because this allows investors to make informed decisions. The more that identifying information is withheld, the less reason we have to trust.

Wikipedia Woes


Wikipedia is one of the best sites on the internet-- volunteers compile information about esoteric topics and the entire compilation is a giant guide to the universe. The beauty of the site is that the internet community has created a vast encyclopedia without a single editor.

Nature compared Wikipedia and Encyclopedia Britannica and found that the upstart contains only slightly fewer errors: Internet encyclopaedias go head to head: "The exercise revealed numerous errors in both encyclopaedias, but among 42 entries tested, the difference in accuracy was not particularly great: the average science entry in Wikipedia contained around four inaccuracies; Britannica, about three."

Wikipedia is being cited as a trusted source more and more frequently, despite the potential for inaccuracies and often amateur writing and organization (just like this blog!). Evan Brown reports at InternetCases.com: Wikipedia and the courts: "Although not everyone is convinced that Wikipedia can be trusted to always tell the truth, it is interesting to note that in the past year or so several courts, including more than one federal circuit court, have cited to it to fill in background facts relevant to cases before them."

The problem with Wikipedia is that the internet community has created a vast encyclopedia without a single editor. Entries can contain factual inaccuracies or present topics in a skewed, biased manner. Wikipedia needs editors. Who chooses the experts for a particular field?

At the Volokh Conspiracy, Orin Kerr finds an interesting relationship between the level of general interest in a subject and the accuracy of that subject's Wikipedia entry: Checking in on Wikipedia's Patriot Act Entry:

I have found Wikipedia entries to be quite helpful when the topic is something esoteric. It seems that when fewer people care about a topic, the better the entry tends to be. When lots of people care about something, lots of people think they know something about it — or at least more people feel strongly enough that they want to get their 2 cents worth into the entry. When lots of people have strong opinions about a topic, even uninformed ones, the Wikipedia entry for that topic ends up being something like Tradesports betting odds on who Bush would pick to replace Justice O'Connor. It's an echo chamber for the common wisdom of the subset of people who use the site more than anything else. And if the views in the echo chamber happen to be way off, then so is the entry.

This suggests that the common wisdom may be entirely backwards. Instead of greater interest leading to greater accuracy, the more people who have a strong interest in a topic, the more likely it is that discredited or inaccurate theories will find their way into that topic's Wikipedia entry. Vocal critics of a widely accepted theory may be more likely than well-respected experts to spend time crafting the Wikipedia entry, so that the end result is that the Wikipedia entry is more likely to reflect the generally discredited minority view.

In an op-ed piece in USA Today, John Seigenthaler discussed A false Wikipedia 'biography': "I had heard for weeks from teachers, journalists and historians about "the wonderful world of Wikipedia," where millions of people worldwide visit daily for quick reference "facts," composed and posted by people with no special expertise or knowledge — and sometimes by people with malice."

Mike Godwin thinks that this problem is not limited to Wikipedia, but is endemic to the Internet as a whole: Wikilibel: "To me, the notable thing about this incident is that it seems to have given John and others doubts about Wikipedia in particular, when in fact the problems he sees are endemic to the Web and the Internet at large."

Unlike posting the same defamatory text on some random website on the internet at large, posting it to Wikipedia gives it credibility. The first place most internet users look to assess the credibility of a piece of information is the source. Because Wikipedia contains a growing number of thorough, accurate and well-written entries, Wikipedia as a whole is gaining a reputation as a trusted source for information. According to the Wikipedia entry about Wikipedia, "Articles in Wikipedia are regularly cited by both the mass media and academia, who generally praise it for its free distribution, editing, and diverse range of coverage." An incomplete, incorrect or defamatory article posted to Wikipedia gains from the authority of the accurate entries.

Eric Goldman believes that Wikipedia Will Fail Within 5 Years: "Wikipedia inevitably will be overtaken by the gamers and the marketers to the point where it will lose all credibility. There are so many examples of community-driven communication tools that ultimately were taken over—-USENET and the Open Directory Project are two that come top-of mind."

Unless Wikipedia starts to implement a strong editorial policy, the entire project will become suspect because of entries like the one about Seigenthaler. Wikipedia is at a critical point: it has accumulated enough entries and reputation that continuing to allow anyone to edit any entry may harm the future development of the project.

As with any controversial topic these days, some lawyers are already preparing a Wikipedia Class Action.

Hijacking RSS Feeds for Fun and Profit

Full-text RSS/Atom feeds are wonderful for information addicts. A newsreader brings in new articles and posts from around the web and makes it possible to skim through hundreds of sites very quickly. Well, the upper limit is probably around 200 sites for anyone whose full-time vocation is not reading blogs.

From a publisher's perspective, full-text feeds cause problems: they make wholesale copying, and thus copyright infringement, especially easy.

Two of my favorite hockey blogs, Puck Update and James Mirtle, recently switched to publishing only abridged feeds after finding their posts providing the content for a third-party web site.

Merely publishing an RSS feed does not grant a license to republish the content on another site, and republishing the full text of the content without permission is a prima facie example of copyright infringement. But what about when a service republishes the full text because a user subscribes to a full-text feed through a hosted service? What is the difference between a user-driven republication and one initiated by the republisher? Is it merely the commercial intentions of the republisher? Does it have to do with source identification and misattribution? The right of publicity?
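
To see how thin the technical barrier is, here is a minimal sketch (the feed URL is hypothetical, and it relies on the third-party feedparser library) of pulling every entry's full text out of a full-text feed. A desktop newsreader does exactly this for a subscriber; a scraper site does exactly this before wrapping the same text in its own ads:

```python
# Minimal sketch: everything needed to lift the full text out of a
# full-text RSS/Atom feed. The feed URL is hypothetical.
import feedparser  # third-party library: pip install feedparser

feed = feedparser.parse("http://example.com/full-text-feed.xml")

for entry in feed.entries:
    title = entry.get("title", "(untitled)")
    # Full-text feeds carry the whole post in content (Atom) or in
    # summary/description (RSS); abridged feeds expose only a teaser.
    if "content" in entry:
        body = entry.content[0].value
    else:
        body = entry.get("summary", "")
    print(title)
    print(body[:200], "...")
```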

Micro Persuasion's Steve Rubel is also a victim of Blog Content Theft: "This problem is only going to grow over time. Perhaps some digital watermarking technology needs to come into play here. Or, once again, Google needs to step in and shut down all Adsense sites that are deliberately spamming the blogosphere and bloggers. Anyone have other ideas?"

Daniel Solove suggests one way to deal with Blog Post Piracy:

There is, of course, copyright law. The creative commons license for Rubel's blog states that the work must be attributed to its author and it cannot be used for commercial purposes. The pirated post doesn't contain his name on the post or the name of his blog, but it does at least have a link to the original post on Rubel's blog. Is this sufficient enough attribution? As for commercial purposes, the blog copying Rubel's posts is displaying Google Ads.

What about hosting an RSS feed that republishes the content of another RSS feed? What if that RSS feed consists of pointers to audio files hosted on the original publisher's server? This is the situation with at least one podcast "service"-- it is publishing its own RSS feeds that link to a podcaster's audio. The service does not hold itself out to be the publisher of the content, but by placing its feeds in podcast directories, these hijackers manage to control the connection between the podcaster and her subscribers. Colette Vogele discusses potential legal solutions: RSS Hijacking? A threat to podcast and blogging infrastructure?:

Since RSS and podcasting is new technology, there does not exist a handy “anti-RSS feed hijacking statute” out there on the books. There are, however, other possible claims that a lawyer can consider. For example, I’m brainstorming on a number of claims including unfair competition, trademark infringement/dilution, DMCA takedown options, computer fraud and abuse, trespass, right of publicity, misappropriation, and the like.
Read the comments for additional technical methods of approaching this problem.

Cyberlaw Central's Kevin Thompson discusses RSS Hijacking: Is it Copyright Infringement?: "Alleging copyright infringement should work, at least for purposes of a cease and desist letter." Thompson goes on to note, "Interestingly, although the RSS feed itself could be copyrightable by itself if it contains sufficiently original material, this method of infringement doesn’t copy the RSS feed itself. The interloper’s site just links to the existing feed which remains intact at the podcaster’s site. The interloper just acts like any other subscriber to the feed, making it difficult to detect."
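
On the technical side, a podcaster can at least detect this kind of hijacking automatically. A minimal sketch under assumptions (the domain and feed URLs are hypothetical, and it uses the third-party feedparser library): fetch a suspect feed from a directory and flag any enclosures that point back at your own server, which is the signature Thompson describes: someone else's feed, your audio.

```python
# Sketch: detect a "hijacked" podcast feed, i.e. a third-party feed whose
# enclosures point at audio hosted on the original podcaster's server.
# The domain and URLs are hypothetical.
from urllib.parse import urlparse
import feedparser  # third-party library: pip install feedparser

MY_DOMAIN = "podcaster.example.com"
suspect = feedparser.parse("http://directory.example.net/rehosted-feed.xml")

for entry in suspect.entries:
    for enc in entry.get("enclosures", []):
        host = urlparse(enc.get("href", "")).hostname
        if host == MY_DOMAIN:
            print("Foreign feed links directly to my audio:",
                  entry.get("title", "(untitled)"), "->", enc["href"])
```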

Finally, if you are going to repost content from my blog, all I ask is that you properly attribute the author and maintain the indications of quoted text-- don't make it appear that "Rafferty" wrote something that should properly be attributed to Easterbrook. Not that it should be too difficult for a careful reader to distinguish…

Previously: Syndication and Copyright: "What are the norms for using, repurposing and republishing syndicated feeds?"

Law 2.0


The Wired GC wonders when the law will migrate to Web 2.0: Web 2.0, Heading West to Law 2.0: "What is needed to make Law 2.0 applicable to legal research is for standards to emerge: how courts and agencies will preserve their work (html or pdf?), how they will announce it (RSS?), how they will categorize it (tags?), and how we will search it (guess who?)."

Let's take a look back, all the way to the year 2002 [music cue: "In the year 2000 (and 2)"], when the geeky legal blogosphere was looking forward to courts publishing decisions in a standard, open XML format. This would allow greater public access to the law and make it possible for law firms and information specialists to create their own value-added databases: Free your data and the rest will follow. That post isn't particularly well-written, but it does touch on the potential of using RSS, XML and the web to distribute court decisions.
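
To make the idea concrete, here is a minimal sketch of what a court's feed could look like, generated with nothing more than Python's standard library (the court, case names, docket numbers, and URLs are invented for illustration, and a real deployment would follow the Atom spec more completely):

```python
# Sketch: a court publishing new slip opinions as a bare-bones Atom feed.
# All case names, docket numbers, dates, and URLs are invented.
from xml.etree.ElementTree import Element, SubElement, tostring

ATOM_NS = "http://www.w3.org/2005/Atom"

def opinion_feed(opinions):
    feed = Element("feed", xmlns=ATOM_NS)
    SubElement(feed, "title").text = "Slip Opinions - Example Circuit"
    SubElement(feed, "updated").text = opinions[0]["date"]
    for op in opinions:
        entry = SubElement(feed, "entry")
        SubElement(entry, "title").text = f"{op['caption']}, No. {op['docket']}"
        SubElement(entry, "updated").text = op["date"]
        SubElement(entry, "link", href=op["pdf"])        # the decision itself
        SubElement(entry, "category", term=op["topic"])  # e.g. "copyright"
    return tostring(feed, encoding="unicode")

print(opinion_feed([{
    "caption": "Doe v. Roe",
    "docket": "05-1234",
    "date": "2005-12-09T00:00:00Z",
    "pdf": "http://www.example-court.gov/opinions/05-1234.pdf",
    "topic": "copyright",
}]))
```

Once decisions are available in a form like this, anyone can aggregate, tag, and search them, which is the whole Law 2.0 bet.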

"Alice" (nka BK) noted some of the advantages to having a central authority for caselaw: More Geek: "I predict there won't be any critical mass happening on that front in the courts anytime soon (and probably not within our lifetimes). People may have grown up using computers, but there are still many many people who don't understand anything but basic application use (and can't even take advantage of the advanced features of those applications). Computer knowledge needs to be driven to comparatively astronomically high levels before judges -- even those that grew up on computers -- will see the need for such a system, especially considering the time, expense, and potential problems with switching over, even if the implementation of the system is transparent."

As lawyers and judges become more web-savvy and enjoy using Google, they may wonder why they can't access caselaw using a search engine that is as fast and friendly.

In Back Linking, Forward Looking, Denise Howell looked at TrackBack and imagined a scenario where "legal citators improve accuracy and stay in business because their editorial judgment continues to have value. Legal research nevertheless becomes more accessible and less costly. This probably won't happen any time soon, but it's not difficult to see how techniques being tested in the weblog arena now may shape the way research is done and laws are made down the road."

Are courts any closer to publishing decisions in an open format and using RSS/Atom (or a similar technology) to make it possible for more aggregators to create value-added services? Probably not.

The good news, at least, is that Westlaw added RSS feeds to its Intraclip service, which allows subscribers to create search watchlists. LexisNexis only offers feeds for some press releases and legal news, not caselaw.

A different type of Law 2.0 is WikiLaw, which aims to be an open-content legal resource. I'm not entirely sold on the concept-- can such a resource ever be considered an authority? Will it be gamed for litigation advantage by counsel?

Competitive Advantage?

Market researchers Ipsos-Reid found that Only 2% of Consumers Care About Legal Issues With Downloading Music: "Only 2% of people who paid a fee to download music from the Internet cited that the contentious legal issues surrounding online music distribution concerned them."

In other words, most people use legitimate services because those services are more convenient, easier to use, or offer better quality and features than illegal P2P.

HarperCollins Plans to Scan

The Wall St. Journal reports that HarperCollins will scan its books and allow search services to index those scans while keeping control of the full text in digital form: HarperCollins Plans to Control Its Digital Books:

Instead of sending copies of its books to various Internet companies for digitizing, as it does now, HarperCollins will create a digital file of books in its own digital warehouse. Search companies such as Google will then be allowed to create an index of each book's content so that when consumers do a search, they'll be pointed to a page view. However, that view will be hosted by a server in the HarperCollins digital warehouse. "The difference is that the digital files will be on our servers," said Brian Murray, group president of HarperCollins Publishers. "The search companies will be allowed to come, crawl our Web site, and create an index that they can take away, but not the image of the page."

First looks at BMG v. Gonzalez

William Patry: BMG v. Cecilia Gonzalez: "The opinion is significant in many respects. First, it established primary liability for those who download (at least under similar facts), an essential underpinning to all the previous (and presumably future) third party liability suits. The opinion then discusses what constitutes primary liability."

Joe Gratz takes a closer look at the procedural aspects of the damages portion of the decision: 7th Cir.: P2P Downloading Is Not Fair Use: "Gonzales, understandably, wanted to plead her case before a jury; tens of thousands of dollars in damages arising from a few dozen MP3s seems excessive to most people. But BMG was clever. They moved for summary judgment only with regard to the 30 MP3s that Gonzales admitted she downloaded and retained without owning CD copies, and only asked for the $750 minimum in statutory damages for each song. This left the jury with nothing to decide. She admitted she’d copied the songs, leaving only the question of damages, and BMG asked for the smallest damages the jury could lawfully award."

Eric Goldman: Downloading Music Isn't Fair Use--BMG v. Gonzalez: "This case deals with a central topic in P2P file-sharing lawsuits--was the downloading excused by fair use? This issue has come up in oblique ways in the past. For example, when the P2P file-sharing services were sued, they unsuccessfully claimed that their users' activities were fair use (e.g., Napster, Aimster). And warez traders (who engaged in large-scale uploading and downloading of copyrighted files) unsuccessfully claimed fair use (US v. Slater). However, we've had very few cases where the downloading defendant litigated his/her own fair use defense."

Cathy Kirkman, Silicon Valley Media Law Blog: 7th Circuit rules in P2P user infringement case: "While the outcome here is unsurprising as a matter of fair use analysis, the Court's characterization of the holding in Sony, while it may be technically correct, does seem like a rather narrow reading of the landmark Sony Betamax case. If the goal of some content owners is to limit the Sony Betamax case to its facts, they seem to have found a receptive audience in the 7th Circuit in that regard."

Evan Brown, InternetCases.com: Seventh Circuit rules in BMG v. Gonzalez: downloading music via P2P is not fair use: "The Seventh Circuit has affirmed a lower court's grant of summary judgment against a user of Kazaa, holding that the downloading of copyrighted music files is not fair use under the Copyright Act."

Kevin A. Thompson, Cyberlaw Central: BMG v. Gonzalez: 7th Circuit weighs in on fair use for filesharing: "The Seventh Circuit ruled yesterday in the case BMG Music v. Gonzalez, which involves a claim of fair use for songs downloaded from the peer to peer file sharing system, KaZaA. The district court had granted summary judgment to BMG, awarding $22,500 in statutory damages and an injunction against further infringement. Gonzalez then appealed to the Seventh Circuit."

Michael Madison: Easterbrook on Fair Use: "With Judge Easterbrook’s imprimatur, Gonzalez may turn out to be the ProCD v. Zeidenberg of copyright law: a case that takes a complex issue and treats it both reductively and persuasively. (Ironists take note: we already have a ProCD v. Zeidenberg of copyright law. It’s ProCD v. Zeidenberg.)"

Previously: Liability for P2P Downloading

Tasini to run for Senate in NY

Of mild interest to those who follow copyright law, Jonathan Tasini is running against Hillary Clinton for the Democratic nomination for Senator from NY. Newsday reports: Labor activist to challenge Hillary Clinton for Senate.

Tasini was the lead plaintiff in New York Times v. Tasini, 533 U.S. 483 (2001), where the Supreme Court ruled that a license to include a work in a newspaper or magazine does not grant the publisher the right to include that work in a database, if that database allows the work to be accessed independently of the work's original context in a newspaper or magazine compilation.

Liability for P2P Downloading

In the latest P2P file sharing case, the Seventh Circuit Court of Appeals upheld a summary judgment decision of the district court that downloading songs off of P2P services constitutes copyright infringement. BMG Music v. Gonzalez, No. 05-1314 (Dec. 9, 2005).

Writing for a unanimous panel, Judge Easterbrook ruled that downloads from P2P networks are not the same as time-shifted copies recorded off of television and are not fair use:

A copy downloaded, played, and retained on one’s hard drive for future use is a direct substitute for a purchased copy—and without the benefit of the license fee paid to the broadcaster. The premise of Betamax is that the broadcast was licensed for one transmission and thus one viewing. Betamax held that shifting the time of this single viewing is fair use. The files that Gonzalez obtained, by contrast, were posted in violation of copyright law; there was no license covering a single transmission or hearing—and, to repeat, Gonzalez kept the copies. Time-shifting by an authorized recipient this is not.

Easterbrook uses the "substitutionary use" test for fair use, which posits that a use is not fair use if it substitutes for the original work: "Music downloaded for free from the Internet is a close substitute for purchased music; many people are bound to keep the downloaded files without buying originals."

Copyright owners may recover damages not only for copies that directly replace sales, but for copies that harm the market for other licensed uses of the work:

Although BMG Music sought damages for only the 30 songs that Gonzalez concedes she has never purchased, all 1,000+ of her downloads violated the statute. All created copies of an entire work. All undermined the means by which authors seek to profit. Gonzalez proceeds as if the authors’ only interest were in selling compact discs containing collections of works. Not so; there is also a market in ways to introduce potential consumers to music. Think of radio. Authors and publishers collect royalties on the broadcast of recorded music, even though these broadcasts may boost sales.

(via How Appealing)

Another Google Book Search Panel

The Association of the Bar of the City of New York: "GoogleNet" and Fair Use: How the "Open Web" May (or may not) Threaten the Rights of Authors, Publishers, and Copyright Holders: "The panelists will discuss how copyright law should treat such content-scanning programs, the extent to which Google (or any other search engine) can or should truly be considered a “digital library,” what harm or unfairness such programs pose for authors and publishers, and, more broadly, who has, or should have, the right to control information contained in books."

Blogging Is the New Black

Law.com reports: Blogging Is the New Black and that lawyers are finding blogging useful: "And now, through a blog, lawyers can speed up the process of establishing their reputation as an expert. For instance, rather than an occasional appearance at a conference or seminar, attorneys can have an ongoing exchange of ideas regarding their particular expertise."

Google Miscellany

Everyone seems to be talking about Google these days. SiliconValley.com hosts a roundtable on Google and The Googleverse:

These days, you can hardly call Google a mere search and ad company. Its products and services are now ubiquitous and include news, blogs, e-mail, instant messages, voice, video, maps, library books, desktop accessories, photo editing and more. It is interested in promoting open document standards, building municipal Wi-Fi systems and analyzing NASA space data. And its next moves are the subject of constant speculation.

Eric Goldman attended and spoke at Yale's Regulating Search conference and posted a thorough write up of the conference: Yale Regulating Search? Conference Recap. Papers presented at the symposium are available on the conference web site.

Daniel Solove comments on how Google search histories are traceable to individuals: Google's Empire, Privacy, and Government Access to Personal Data: "No matter what Google's privacy policy says, the fact that it maintains information about people's search activity enables the government to gather that data, often with a mere subpoena, which provides virtually no protection to privacy -- and sometimes without even a subpoena."

In Newsweek, Eric Schmidt and Hal Varian discuss lessons at Google for managing smart employees in the information age: Google: Ten Golden Rules: "The ongoing debate about whether big corporations are mismanaging knowledge workers is one we take very seriously, because those who don't get it right will be gone. We've drawn on good ideas we've seen elsewhere and come up with a few of our own. What follows are seven key principles we use to make knowledge workers most effective."

The New York Times reports on Google's corporate culture: At Google, Cube Culture Has New Rules: "Google, like I.B.M., says that it is forging a corporate culture in which success depends on performance."

Hot Practice Areas

New York Lawyer's list of Hot Practice Areas for 2006 includes "intellectual property, mergers and acquisitions, private equity and litigation."

Claria pops-up some good press

Claria gets some good press from Wired Magazine: Don't Call It Spyware

Today Gator, now called Claria, is a rising star. The lawsuits have been settled - with negligible impact on the company's business - and Claria serves ads for names like JPMorgan Chase, Sony, and Yahoo! The Wall Street Journal praises the company for "making strides in revamping itself." Earlier this year, The New York Times reported that Microsoft came close to acquiring Claria. Google acknowledges Claria's technology in recent patent applications. Best of all, government agencies and watchdog groups have given their blessing to the company's latest product: software that watches everything users do online and transmits their surfing histories to Claria, which uses the data to determine which ads to show them.

Spyware researcher Ben Edelman finds that Claria's latest installers are not particularly user-friendly: What Claria Doesn't Disclose (Any More): "Claria's ordinary installations still fail to tell users what users reasonably need to know in order to make an informed choice. In particular, Claria's current installations omit prominent mention of the word "pop-up" -- the key word users need to read in order to understand what Claria is offering, and to decide whether to agree."

If Claria's software does not accurately disclose the nature and frequency of pop-ups and deprives internet users of the opportunity to give informed consent, should it have "the blessing" of "government agencies and watchdog groups"?

A quick guide to info literacy

Lifehacker's Wendy Boswell posted a useful basic guide to information literacy for web searching: Seek and Ye Shall Find: How To Evaluate Sources on the Web: "Believe it or not, the Web does not always contain accurate information. In fact, every once in a while, you might come across something that (gasp!) is not true. Well, that’s to be expected, really - the Web is made by people, and people aren’t perfect, and people make up a LOT of coo-coo-crazy stuff."

Previously: Information Literacy and the Law, Information Literacy.

Here are the most interesting articles and podcasts about Google Book Search and copyright law that have been published and posted since my last collection of links.

In Wired magazine, Lawrence Lessig discusses: Google's Tough Call: "Google wants to index content. Never in the history of copyright law would anyone have thought that you needed permission from a publisher to index a book's content. Imagine if a library needed consent to create a card catalog. But Google indexes by "copying." And since 1909, US copyright law has given copyright holders the exclusive right to control copies of their works. "Bingo!" say the content owners."

At the University of Chicago Faculty blog, Douglas Lichtman writes: Lessig, Google Print, and Movies: "How should we decide when a copyright holder is entitled to earn revenue from a new technology? Consider, for example, movies. If I make a movie based on your book, and my movie hurts sales of your book, I take it that it is easy to agree that I should have to share some of my movie revenue with you. The new technology in that case displaced sales of the old one, and the law likely should help to dampen that blow, in this case by requiring movie producers to license the work. But what if it were the case that movie sales did not at all diminish book sales?"

Brad DeLong puts an economist's spin on the debate: GooglePrint: "I tend to put on my right-wing public-choice hat here, and side with GooglePrint. The private beneficiaries from assigning too much of the value of innovation to the dead hand of old property rights are concentrated. The private beneficiaries of assigning too little of the value are diffuse. In a public-choice world ruled by lobbyists, there will be strong pressures on legislation and law to overprotect existing property."

In the Boston Globe, David Weinberger examines one of the potential advantages of having book content digitized: Crunching the Metadata: What Google Print really tells us about the future of books: "Despite the present focus on who owns the digitized content of books, the more critical battle for readers will be over how we manage the information about that content-information that's known technically as metadata."

At the MassLawBlog, Lee Gesmer discusses Google And The Digitization of The Planet’s Books: "I suspect that what’s really keeping the publishers and authors up nights is this question: who’s going to have control over this compilation of data? Sure, it’s “Do No Evil”-Google today, but who might have the resources to do the same thing, even on a smaller scale, in the future? And remember, the future is a long time. One can imagine the great-to-the-nth descendants of today’s publishers cursing their literary ancestors for allowing Google to take the first step down the slippery slope that leads, who knows where?"

Edward Wyatt covered the debate at the NYPL for the NY Times: Googling Literature: The Debate Goes Public: "If there was any point of agreement between publishers, authors and Google in a debate Thursday night over the giant Web company's program to digitize the collections of major libraries and allow users to search them online, it seemed to be this: Information does not necessarily want to be free."

Lawrence Lessig offers his take from the NYPL debate: the “discussion”: the morning after: "The AAP and AG say they believe in “fair use.” If that’s so, then they must believe that someone has a right to make money using fairly the work of others. If that’s so, then they must believe that someone has the right to fairly use the work of others without permission. And so if that’s so, then if Google Book Search is fair use, not only is Google doing nothing wrong. Google is, from the perspective of the authors and publishers, doing something extra nice — giving them the permission to opt out of the index."

The Progress and Freedom Foundation hosted another debate about Google Book Search: Gutenberg Meets Google. C-SPAN covered the debate and has video of the event.

PFF's Adam Thierer discusses some of the points raised there, in particular the law and economics evaluation: Google Print and Transaction Cost-Based Analysis for Fair Use Law: "I'm not going to go into all the issues at stake in this debate, but I did find it interesting the panel of legal experts speaking at the event spent so much time focusing on transaction costs, something we usually only hear about when the panel consists of a bunch of economists."

Evan Brown's InternetCases.com Podcast for November 29, 2005 is the audio from a panel discussion at the John Marshall School of Law: "Professor Doris Long moderated the discussion. The first panelist to speak was professor Leslie Reis, who addressed various business issues pertaining to the Google Book Search model. Todd Flaming, a practicing attorney and adjunct professor at John Marshall spoke next on the technology behind the project. [Evan Brown] spoke next on the legal issues in the cases filed by the Authors Guild and the American Association of Publishers, focusing mainly on the fair use factors of copyright law… Professor David Sorkin compared the nature of indexing pages in Google Book Search with the process of indexing regular web pages. The final speaker was Tom Keefe, a reference librarian at the John Marshall Law School library, who gave a librarian's perspective on how Google Book Search could affect the future of research."

Rebecca Tushnet reports on a talk at George Mason University's Center for History and New Media about Google Book Search and similar projects: Massive Digitization Projects: "The speakers, Clifford Lynch (Executive Director, Coalition for Networked Information) and attorney Jonathan Band, were engaging and informative; the audience seemed to be library-oriented rather than lawyer-packed."

Even if Google prevails in the lawsuits with publishers and authors in the US, European law does not have the same concept of fair use as American law: Google digitisation faces Euro legal challenge: "The American "fair use" law, which Google has used as a justification for its scanning of in-copyright material from libraries in America, is, Morris said, broader than its European equivalent, "fair dealing". Google is currently embroiled in lawsuits in the US with both the Authors Guild and the Association of American Publishers over its actions."

Previously: Google Print and Fair Use, Google Print at the Public Library, Publishers Sue Google, Too, Google, Publishers, Copies and "Being Evil".

Is preferential access still open?

Telecom and internet infrastructure providers seek changes in the way the internet is regulated to be able to charge for preferential treatment and greater access. The Washington Post reports: Executive Wants to Charge for Web Speed: "William L. Smith, chief technology officer for Atlanta-based BellSouth Corp., told reporters and analysts that an Internet service provider such as his firm should be able, for example, to charge Yahoo Inc. for the opportunity to have its search site load faster than that of Google Inc."

In Linux Journal, Doc Searls discusses why open access, rather than preferential access, makes the internet the success that it is today, and how end-users and creators can protect this principle: Saving the Net: How to Keep the Carriers from Flushing the Net Down the Tubes:

They see a problem with freeloaders. On the tall end of the power curve, those 'loaders are AOL, Google, Microsoft, Yahoo and other large sources of the container cargo we call "content". Out on the long tail, the freeloaders are you and me. The big 'loaders have been getting a free ride for too long and are going to need to pay. The Information Highway isn't the freaking interstate. It's a system of private roads that needs to start charging tolls. As for the small 'loaders, it hardly matters that they're a boundless source of invention, innovation, vitality and new business. To the carriers, we're all still just "consumers". And we always will be.

Information Addiction

Via Kevin Heller, today's NYT style section features an article discussing whether internet addiction is a serious problem: Hooked on the Web: Help Is on the Way: "These specialists estimate that 6 percent to 10 percent of the approximately 189 million Internet users in this country have a dependency that can be as destructive as alcoholism and drug addiction, and they are rushing to treat it. Yet some in the field remain skeptical that heavy use of the Internet qualifies as a legitimate addiction, and one academic expert called it a fad illness."

I don't know about internet addiction, but I'm pretty sure that I am an information addict. The internet and RSS/Atom, in particular, make it easy to get connected to new information all the time. This really isn't much of a problem unless it keeps you from getting necessary things done.

And I think that reading loads of articles and posts for this blog is doing that-- it does enable me to feel like I'm doing something productive and useful, while not actually doing anything productive, like finding a job. Of course, the corollary is that the job market for recent law school grads outside of Biglaw (and public-sector biglaw, like the DA) is very small. And the Biglaw hiring is generally done through OCI and summer programs, not after graduation. Most of the interesting, available attorney positions are for lawyers with 3-5 years of work experience. For many other positions, a JD is overqualified. It's not that positions aren't out there, but they seem few and far between, so I am frustrated.

The blog does at least keep me in touch with the law and current events. A small number of the posts are actually not bad…

The Sixth Circuit ruled that the shape of a guitar alone is not enough of a similarity to create initial interest confusion. Gibson Guitar Corp. v. Paul Reed Smith Guitars, LP, No. 04-5836, No. 04-5837 (6th Cir. September 12, 2005). Consumer products will tend to appear like their competitors at a sufficient distance, and initial interest confusion cannot substitute for "point-of-sale confusion" in this context.

Open Source Licenses

The NY Times reports on the Free Software Foundation's plans to revise the General Public License (GPL), which is the license that governs the use of Linux and many other open source programs: Overhaul of Linux License Could Have Broad Impact.

Lawrence Lessig discusses the problems caused by mixing works that use different free licenses: On Compatibility: "All of these free licenses, as well as the current versions of all Creative Commons licenses, share a common flaw. Like the world of computing in the 1970's, or like the world of content that DRM will produce, these licenses wrap creative work in ways that makes that creativity incompatible."

In LLRX, Dennis Kennedy offers Best Legal Practices for Open Source Software: Ten Tips For Managing Legal Risks for Businesses Using Open Source Software: "The Open Source licenses represent a very different approach to licensing than most businesses, and their lawyers and legal departments, have become accustomed to in the commercial software setting. Research on the Open Source licenses will often turn up conflicting interpretations, misinformation, philosophical arguments and diametrically opposed points of view. This result should not surprise you, especially if you have researched the commentary on the changing Microsoft software license policies where you will see much of the same. There is good reason, in both cases. Important issues are at stake and a casual approach can result in significant consequences."