Remote storage DVR concept resurfaces with Cablevision

When is a DVR not a DVR? How about when there’s no DVR? If Cablevision has their way, customers will enjoy the benefits of a DVR without the DVR, at least in the sense that we’ve come to expect. The plan is so simple, you’ll swear you’ve heard it before: instead of recording shows on a box in your home, why not let the cable company do it for you? Enter "remote storage DVRs."

RS-DVR for short, a remote storage DVR is like a TiVo that stores all of its data at the cable office. As a kind of glorified digital set-top box, the RS-DVR communicates over high-speed cable networks much in the way that on-demand services do, but there’s a twist: you set the shows to record, and you manage your recordings. You also don’t pay per show. Cablevision plans to provision 80GB of storage space for users who sign up for the service, billed at less than US$9.99/month—their current DVR fee. Since users only need a digital set-top box to make it work, Cablevision hopes that it can sign up a significant portion of the customer base that is already equipped to take advantage of the service. To make the plan sound even sweeter, Cablevision says that it expects such a service to save it money and the hassle of dealing with DVR repairs.

If you do think that you’ve heard all of this before, it’s because you have. In 2003 AOL/Time Warner planned a similar service dubbed Mystro, but canceled it amid fears that the entertainment industry would come down hard on them. Partners grew cold to the idea for the reasons we’ve come to expect: fears over how it would affect advertising revenues; fears over how widespread DVR usage would affect DVD sales; fears over muddying the waters between broadcaster and cable provider. Cablevision will likely see the same kinds of pressures, but the company hopes that the now-established marketplace for DVR technology will protect them from unreasonable objections.

"Consumers have well-established rights to ‘time-shift’ television programming by making copies for personal, in-home viewing," the company says. "This new technology merely enables consumers to exercise their time-shifting rights in the same manner as with traditional DVRs, but at less cost."

Just who is the broadcaster?

While proposals such as the broadcast flag threaten some aspects of DVR usage, the citizen’s right to timeshift—recording a broadcast show to watch later—is currently safe and sound. But Cablevision’s service, just like AOL’s failed Mystro, blurs the picture. Copyright owners begin to twitch when they hear about cable companies centrally storing recorded shows and playing them back to users, all without paying royalties or licensing fees. In the minds of the copyright holders, this is on-demand priced to make the cable companies rich off the backs of Hollywood.

Cablevision should be safe. For now, the saving distinction comes from the fact that Joe User has to instruct his DVR to record shows. This points to two things that we think will be central to how RS-DVR efforts avoid the sharp fangs of the entertainment industry. First, RS-DVR won’t allow users to tap into a centralized storage system to get at shows that they didn’t record. At the very least, this is a good-faith effort to stay clear of TV syndication and DVD sales. Second, the storage capacity will have to be kept modest; consumers can’t be permitted to record practically everything on TV. Limited storage and limited recording capabilities (two shows at once) will keep the service from becoming a true threat to the entertainment industry. Maybe.

But as network speeds get faster, DVRs get more sophisticated, and the entertainment industry hones its skills at selling content online, look for this issue to get political. In the meantime, Cablevision’s plans will provide a fantastic opportunity to look at DVR uptake sans DVR. Could this be the thing that finally makes the DVR concept ubiquitous?

Skype hit with RICO suit

VoIP company Skype is the target of an interesting lawsuit filed in the US District Court for the Central District of California. Filed by Streamcast, makers of the once-popular Morpheus file-sharing software, the RICO (Racketeer Influenced and Corrupt Organizations Act) lawsuit centers on the FastTrack peer-to-peer technology that powered both Morpheus and Kazaa.

RICO is most often associated with organized crime, but companies are sometimes the targets of RICO actions. Most notably, the RIAA was hit with a RICO lawsuit last October by a woman who claims she was mistakenly accused of file-sharing by the recording industry group.

Let’s set the way-back machine to early 2002. The original Napster had been dead for a couple of years, and arising to fill the P2P void were a pair of applications: Morpheus and Kazaa. Both were built on the same FastTrack technology, the brainchild of Niklas Zennström, Janus Friis, and a handful of others. Morpheus and Kazaa coexisted peacefully in the P2P world until February 2002, when Morpheus users were suddenly barred from the FastTrack file-sharing network, reportedly over unpaid licensing fees.

The history of FastTrack since then has been a convoluted one. Kazaa and its underlying FastTrack technology were spun off to Sharman Networks, a company incorporated in the Pacific island nation of Vanuatu. Since that time, Sharman has been embroiled in all manner of lawsuits, including one that resulted in a ruling that Sharman had encouraged Kazaa users to violate copyright laws and an order that the company foot 90 percent of the plaintiffs’ legal bills.

So what does all of this have to do with Skype? After washing their hands of Kazaa and starting Altnet, a "legitimate" P2P business, Zennström and Friis started Skype. Getting its start as a VoIP service that allowed members to make free PC-to-PC calls and paid calls from PCs to telephones, Skype was snapped up by eBay last fall for a cool US$2.6 billion.

Streamcast says that the technology used in Skype is the same FastTrack technology that was used in Morpheus and Kazaa. More importantly, the company claims that it had the right of first refusal to purchase the technology before it was redirected to Sharman Networks. Streamcast is also alleging that the FastTrack technology has remained under the control of Zennström and Friis the entire time.

It remains to be seen whether there is any basis for Streamcast’s lawsuit. With the infusion of cash from the eBay takeover, Skype certainly makes a tempting target. eBay is not a party to the lawsuit, as all the alleged chicanery took place prior to the deal last year, but the case makes me wonder if the online auction house wishes it had done a more thorough job of due diligence before consummating the transaction.

Podcasts don’t cast into the Pod

A study by Bridge Data suggests what many of us already suspected: podcasting is popular, but it has little to do with the "pods," that is, the iPods and other portable players for which "podcasting" was supposedly born. The study concluded that 80 percent of podcasts are either listened to or watched on a PC, or simply deleted.

For some, this raises the question: what is a podcast? TDG Research is apparently puzzled by this quandary: isn’t a podcast, by definition, supposed to be played on a portable device? I have another question: is this really a dilemma?

"Push" content is nothing new, and podcasts are fine examples of push content. Regardless of where a podcast is played, the mechanism for delivery has become the essence of podcasting. Automatic updating/delivery, usually fueled by either RSS or some helper application such as iTunes, is what drives it.

How people consume the content they download shouldn’t be important on any definitional level. In fact, I think that the explosion of online content sales, timeshifting, and more recently, placeshifting has shown us that pinning down how people consume content is misguided. At least, I think it’s misguided when we’re talking about defining a new medium, if that’s what "podcasting" is.

If podcasting is indeed a new medium, it’s a small one. According to Bridge Data, only 9 million people will listen to podcasts this year. To put that into perspective, women’s lifestyle portal iVillage.com reportedly has about 35 million readers. In short, podcasting is still emerging, which is another way of saying that the medium could change drastically over the course of the next year, as more people subscribe to podcasts, both commercial and free. And like a giant black hole, the PC is becoming an entertainment device (for more than just gamers) by exerting its gravity on all things digital, including these podcasts. iPods and other portable media players are mere accessories.

Over the next year, podcasting may truly take off, but not necessarily on account of its grassroots feel. At around two years old (depending on who you ask), podcasting is still extremely new, but it’s showing the maturity of something much older. Already we have both audio and video podcasts, and commercial offerings are on the rise. Unlike MP3, which lived for years online with no "commercial" component, podcasting is being commercialized as we speak. I expect many players, big and small, to dip their toes into the commercial podcasting business, either by booking advertising spots in their podcasts (many, many already do this) or by charging for their content outright. Regardless, the era of push content is upon us again, but this time the name is "cooler," even if it’s not entirely clear what the name means to everyone.

Oxygen metabolism and the rise of the modern world

The latest issue of Science contains an article and perspective on the use of oxygen in metabolic pathways, with only details of the article, not the perspective, available to non-subscribers. That is a shame, because the perspective is much more interesting than the article, and I can only summarize it. As many of you probably know, life started out in a reducing atmosphere, utilizing anaerobic metabolism. Significant amounts of free oxygen didn't exist on Earth until the advent of photosynthesis, over a billion years after the start of life. This came after the three main branches of life had already split, inheriting a single anaerobic metabolic system from their common ancestor.

Photosynthesis undoubtedly evolved because it provided an energy source, but it did have the side effect of producing oxygen. Although oxygen is technically just a waste product of photosynthesis, its arrival must have been quite a shock, as it was effectively toxic to the existing cellular metabolisms. Organisms came under immediate selective pressure to protect their components from being oxidized. At the same time, oxygen presented an opportunity: metabolizing with oxygen provides much more energy than anaerobic pathways can generate. As the perspective suggests, the organisms alive at the time were either pushed into anaerobic niches or did "the biological equivalent of grabbing the most valuable possessions (including pictures of your ancestors) when your house is on fire, so that you may be able to start life anew after the catastrophe."

Existing enzymes were repurposed to protect the cells' components from oxygen, existing pathways were expanded to include oxygen, and entirely new pathways evolved. The study provides a glimpse into the results of this by using computers to randomly sample about 100,000 metabolic networks and determine their size and dependency upon oxygen. The authors discovered that oxygen has become the most prevalent chemical in metabolism, outpacing even the universal energy source, ATP. Oxygen-utilizing pathways have also outpaced the anaerobic ones in complexity, with far more proteins involved. The variation of anaerobic systems roughly follows the pattern of the tree of life, as would be expected from inheritance of a single system through common descent. Extensive speciation had already occurred by the time cells needed to handle oxygen, though, so many different solutions to this problem arose throughout the tree of life, and the variation shows little overall pattern.
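
The paper's actual sampling procedure is surely more sophisticated than anything that fits here, but the counting idea can be sketched with a toy example: draw random subnetworks from a made-up, five-reaction "universal" reaction set and tally how often each metabolite turns up. Ranking those tallies is the sort of prevalence measure by which O2 could outpace even ATP.

```python
import random
from collections import Counter

# Toy "universal" reaction set: each reaction is the set of metabolites
# it touches. Real metabolic reconstructions hold thousands of reactions;
# these five names are illustrative only.
REACTIONS = [
    {"glucose", "ADP", "pyruvate"},
    {"pyruvate", "O2", "ATP"},
    {"NADH", "O2", "ATP"},
    {"fatty_acid", "O2", "NADH"},
    {"glucose", "NADH", "ATP"},
]

def metabolite_prevalence(reactions, n_networks, network_size):
    """Sample random subnetworks; count appearances of each metabolite."""
    counts = Counter()
    for _ in range(n_networks):
        for rxn in random.sample(reactions, network_size):
            counts.update(rxn)
    return counts

counts = metabolite_prevalence(REACTIONS, n_networks=100_000, network_size=3)
# Rank metabolites by how often they appear across the sampled networks.
for metabolite, n in counts.most_common():
    print(metabolite, n)
```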

One thing that is suggested by the pattern of aerobic pathways, however, is that the eukaryotes, which include all multicellular organisms, were the big beneficiaries of the appearance of oxygen. Only about 20 percent of the anaerobic pathways are unique to eukaryotes, while about half of the oxygen-based ones are. Somewhat suspiciously, the perspective notes that multicellular organisms arose within the eukaryote lineage at roughly the same time as the start of photosynthesis. It seems that the added energy efficiency, coming at a time of new selective pressures, may have pushed life into multicellularity.

Meanwhile, New Scientist covers a more controversial proposal related to the onset of photosynthesis. The current chemical weathering of rock requires carbon dioxide, which may not have been present in significant amounts prior to photosynthesis. Researchers are proposing that CO2-based weathering is necessary for the formation of granite, a type of rock that appears to be unique to Earth, although that may just be because we haven't looked in the right places on other planets. Regardless, granite is a low-density rock that is responsible for the light crust that supports the continents. Putting this together, the proposal suggests that the onset of photosynthesis created the right conditions for the formation of the current, continent-based tectonic system. Future research may tell whether photosynthesis shaped not only life, but also the world it came to inhabit.

A tale of two Apples… and three lawsuits

It was the best of times, it was the worst of times. Apple Computer, a company sharing a moniker with the Apple recording label, has been sued by that label’s parent company not once but thrice. The first two suits were settled for very different amounts, but the third goes to the British High Court this week, and could turn out to be the most damaging of all.

The first court case probably seemed a bit frivolous back in 1981: a record label devoted primarily to issuing recordings made by a band that had ceased to exist a decade earlier suing a computer company, for little more than a similarity in names. Yet the band behind the Apple Corps record label was the Beatles, arguably the most important rock act ever, and the company named in the lawsuit was Apple Computer, then in its influential heyday as a computer manufacturer.

The results of the suit didn’t seem particularly significant at the time. Reportedly, Apple Corps accepted a settlement of around US$80,000 from the computer builder, and extracted a seemingly inconsequential agreement that Apple Computer would stay out of the music business.

Twenty-five years later, that agreement looms large, as Apple Computer is now known as the company behind the iPod, iTunes, and even more importantly to Apple Corps, the iTunes Music Store. With overall downloads recently topping a billion songs, iTMS is nothing if not part of the music industry, and Apple Computer has become one of the major players in the biz.

In September 2003, Apple Corps again filed suit against Apple Computer, this time for entering the music business in violation of their previous agreement. On the surface, it certainly looks as though Apple Computer could end up on the hook for a potentially very large and damaging settlement.

The defense revolves around the second lawsuit, filed in 1989 by the music label against the computer maker. That suit involved the Macintosh’s growing ability to produce music. In 1991, the case resulted in a much larger cash settlement (reports vary from US$26 million to US$50 million) and a new set of agreements under which Apple Computer would stay out of the distribution of music media. Data transfers were allowed under the agreement, and music as data, as opposed to physical media, is likely to form the crux of Apple’s argument.

It is important to note that, at the time, there was no such thing as online transfer of music. While compact discs were common and digital music had existed for years, storing sound as a file was a fairly esoteric concept, the Internet was in an embryonic state, and consumer data transfer crawled along over dial-up modems. In short, only the most daring futurist might have properly predicted where the music business would be headed just ten years later. The concept of transferring music as digital data was little more than science fiction.

So Apple Computer finds itself on the receiving end again. This time, Apple Corps looks like it has a powerful case, as it has already twice received settlements for far less direct infringement on its business interests by the computer maker. Nevertheless, with so much at stake, expect Apple Computer to fight long and hard to prove that iTMS file transfers are perfectly acceptable under the 1991 agreement.

Microsoft opens public bug database for IE 7 and beyond

Microsoft has announced the launch of a new site dedicated to public feedback for Internet Explorer 7 and future versions. The Internet Explorer Feedback site is open to anybody who wishes to report issues and log bugs, or just leave comments or feature suggestions for future versions. This sort of feedback is new for Internet Explorer, and the web site explains the reasons for the change:

“Many customers have asked us about having a better way to enter IE bugs. It is asked ‘Why don’t you have Bugzilla like Firefox or other groups do?’ We haven’t always had a good answer except it is something that the IE team has never done before. After much discussion on the team, we’ve decided that people are right and that we should have a public way for people to give us feedback or make product suggestions. We wanted to build a system that is searchable and can benefit from the active community that IE has here.”

The site is run using the Microsoft Connect feedback engine, used by both internal and public Microsoft beta programs. Signing in requires a Passport account—which most people will recognize as their account info for Hotmail and MSN Messenger, but which is also used at Microsoft for things such as MSDN subscriptions. Once you’ve entered your Passport info, the site requires that you fill in some personal details, then presents a long legal disclaimer screen, and finally offers a list of possible projects to “apply” for, one of them being the Internet Explorer 7 beta.

The site is organized fairly simply, with a search bar for finding existing bugs and a list of the “top” bugs in the product at the moment, as well as a list of related articles for the product. Each bug gets a rating, validation by other testers, and a comments section. While there are still relatively few people actively using the site (most bugs have around 30 to 40 votes and fewer than ten comments), activity is expected to rise as the product nears completion. The site is cleaner and somewhat easier to navigate than Mozilla’s Bugzilla page, although getting access is a much more laborious process (reading bugs requires no login at all on Mozilla’s site).

So how useful are public bug databases in creating a good quality software product? Opinion varies wildly on the issue. Many open-source projects use these databases for generating feedback (many of these projects also use Bugzilla), but some people have commented about the difficulties encountered when using public forums for large-scale projects. Useful feedback can often be lost under a morass of poorly written, emotionally charged posts. Companies with professional testers report that the general public will typically find only the most obvious bugs, and often fail to accurately describe the procedures for recreating them.

Still, for a product such as a web browser that is expected to work properly with the millions of strange and wonderful web pages out in the wild, a place for public feedback may uncover problems that in-house testers failed to find. The existence of such a site also puts a more open face on the IE development process, which may help reverse the public’s perception of Internet Explorer as a stagnant product.

Global warming and hurricanes

Shortly after hurricane Katrina struck, some people started to blame particular hurricane events on global warming. Most climate scientists do not support such claims, since climate models provide a window into average climate conditions: they predict changes in the rate and intensity of extreme events in general, not the occurrence of specific events. What we do know is that climate models provide a strong link between greenhouse gases and warmer ocean surface temperatures. It is also well understood that hurricanes and tropical storms flourish over warm waters.

Several studies have shown that hurricane frequency and intensity have been steadily increasing for the last 25 years. Scientists from Georgia Tech* have attempted to link this with various climate variables. They examined ocean surface temperatures, humidity, and wind conditions using satellite data collected from 1970 to 2004. Analyzing data of this sort is extremely complex, since it is an intrinsically multi-variable system. However, using statistical methods derived from information theory, they were able to show that ocean temperature was the strongest predictor of hurricane frequency and intensity. At this point someone will usually point out that correlation != causation; however, the mechanisms for generating hurricanes are fairly well understood, especially in terms of the role of surface ocean temperature. In this case it is fair to say that the link they have discovered really only confirms what we already knew.
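
The press report does not spell out the authors' exact method, but a standard information-theoretic move is to estimate the mutual information between each candidate variable and the outcome, then rank the predictors. Here is a minimal sketch along those lines, with synthetic data standing in for the satellite record:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) in bits from two samples via 2-D histogram binning."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                    # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)          # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)          # marginal p(y)
    nz = pxy > 0                                 # avoid log(0) terms
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n = 1000
sst = rng.normal(size=n)                          # sea surface temperature (synthetic)
humidity = rng.normal(size=n)                     # independent of intensity here
intensity = 0.8 * sst + 0.2 * rng.normal(size=n)  # intensity tracks SST by design

for name, var in [("SST", sst), ("humidity", humidity)]:
    print(name, round(mutual_information(var, intensity), 3))
```

Run on this toy data, SST carries far more information about intensity than humidity does; the study's contribution was demonstrating an analogous ranking on three decades of real observations.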

Climate modeling and our understanding of physical and chemical atmospheric processes provide a strong link between greenhouse gases and ocean surface temperatures. Thus it appears that some of the doom-saying is in fact coming true, and that global warming carries more serious consequences than a rising sea level.

* Press report; I could not read the ScienceExpress article.

The slow approach of the self-driving car

I know a number of people for whom the coolest part of Minority Report was the highway full of futuristic, self-driving cars. (For my part, I was more into the jet packs, but whatever.) The 2005 DARPA Grand Challenge, in which a computer-controlled VW Touareg successfully navigated 132 miles of desert, was the first step in such a driverless future, and a new EET feature article outlines many of the intervening steps that it will take to bring fully autonomous vehicles to a highway near you.

The plan appears to be progressive automation, in which computers slowly take charge of different parts of the driving experience over the course of the next two or three decades. Right now, computers adaptively tweak things like steering and suspension, and before long they’ll be slamming on the brakes for you when the in-car radar detects an imminent, unavoidable collision. Eventually, a car’s computer and sensor array will allow you to place the vehicle on autopilot so that you can read the morning paper during the commute to work.

At any rate, you’ll want to check out the article, which talks to a number of experts in the field to get a sense of how and on what timetable self-driving vehicles will arrive.

Braking factors: liability, safety, cost

As the article mentions in passing, redundancy will be critical. I think it’s unlikely that we’ll see military-style triple-redundancy, where each critical system is backed up by two other completely different implementations that were designed in isolation from each other, but I wouldn’t be surprised to see something pretty close. The main reason is liability.

If I’m cruising at 75mph in the 2019 Honda WireDrive SUV and one crucial part of the car’s complex nervous system of electronics fails, causes me to veer into the oncoming lane, and sets off a 20-car pile-up, all of the passengers in all 20 of the cars will sue Honda—not me—for fifty hojillion yuan (that’s about US$500 million in today’s currency, accounting for inflation and for the fact that in 2019 we’ll all be using the currency of our Chinese economic overlords).

Automakers will want to be 100 percent certain that when they take over accident liability from the driver, their systems are not to blame for any property damage or loss of life. In this respect, I think liability and safety issues, and not raw technical challenges, will probably be the main factor determining the pace of the rollout of these autonomous driving technologies. When the first fully autonomous cars hit the road, it probably won’t be because completely autonomous driving has just now become technically possible, but because automakers were finally able to provide the necessary redundancy at a low enough cost.

Speaking of redundancy, I also expect that autonomous vehicles’ internal networks of sensors and processors will expand to include nearby cars in a kind of ad-hoc wireless mesh. Your car will get a heads-up when the vehicle three cars ahead spots an obstacle in the lane, and it’ll take appropriate action. So when you pull onto the Interstate, you’ll also join a giant, roving, ad-hoc mesh network that includes hundreds of other cars, as well as government traffic systems, emergency and police vehicles, etc. And who knows—maybe the people in the car next to you will be up for a pick-up game of DNF deathmatch over the inter-vehicular network.
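
No such inter-vehicular protocol exists yet, so the sketch below is pure speculation: the simplest scheme for a warning to ripple back through a chain of cars is hop-limited flooding with duplicate suppression, which looks something like this.

```python
from dataclasses import dataclass, field

@dataclass
class Car:
    name: str
    neighbors: list = field(default_factory=list)  # cars currently in radio range
    seen: set = field(default_factory=set)         # message IDs already handled

    def receive(self, msg_id, payload, ttl):
        if msg_id in self.seen or ttl < 0:
            return                                 # duplicate suppression / hop limit
        self.seen.add(msg_id)
        print(f"{self.name}: advisory received -> {payload}")
        for car in self.neighbors:
            car.receive(msg_id, payload, ttl - 1)  # rebroadcast one hop further

# Four cars in a line; each only "hears" its immediate neighbors.
a, b, c, d = Car("A"), Car("B"), Car("C"), Car("D")
a.neighbors, b.neighbors = [b], [a, c]
c.neighbors, d.neighbors = [b, d], [c]

# Car A spots debris; the warning ripples back three hops to car D.
a.receive(msg_id=1, payload="obstacle in lane 2", ttl=3)
```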

Mobile phone exam cheating on the rise in England

According to a new study from England’s Qualifications and Curriculum Authority (QCA), the number of students in the country who are using mobile phones to cheat on exams is rising fast:

“Over recent years we have seen a noticeable rise in the number of mobile-phone related incidents in examination halls across the country,” said QCA Chief Executive Ken Boston.

The report found that over 4,500 students were penalized for cheating during the last round of A-level (pre-university) and GCSE (high school) exams, up 27 percent over last year. Candidates caught with mobile phones accounted for nearly a quarter of these offenses.

Because of the rise of mobile phone cheating during exams, students are instructed not to bring phones into exam rooms, and advised to leave them at home if possible. Students can currently be docked marks or even failed simply for having a mobile phone during an exam, whether they use it to cheat or not.

The good news is that the overall number of students who are penalized remains low, with less than one incident for every 1,500 exams written. Other offenses included plagiarism, disruptive behavior, failing to follow the invigilator’s instructions, and cheating using more traditional methods.

Cheating has been a problem for examiners as long as there have been exams, but does the rise of wireless technology present a special problem for education? A voice call would not be much use for in-exam cheating, but texting a friend who wrote the same exam yesterday (or last year) is a far more discreet method. However, educators can adjust in much the same way as they did to the cheating possibilities of programmable calculators: by simply not allowing the devices to be used.

Is the rise of mobile phone cheating indicative of a larger societal problem? Already there are some concerns about the fact that today’s generation of gadget-obsessed kids may sacrifice concentration and accuracy to the holy grail of multitasking. However, the low percentages of cheating seem to indicate that the traditional examination is not under an immediate threat. What will happen to the education system when students get Google feeds directly implanted into their visual cortex is, of course, another question.

Two new Internet phone services step up to the plate

If you’re someone who loves to use the Internet to connect your voice to someone else’s ears, two more options have become available. As with so many things online, the mature business model for voice over IP (VoIP) has yet to really solidify, and we currently seem to be in a state of affairs in which, if a service doesn’t yet exist that offers what you want, you can wait five minutes and one will come along.

With that in mind, both Jajah and Lycos Phone offer slightly different takes (and business models) on how to use those dandy little IP packets to magically scoot your voice around the world. Here’s a quick breakdown of what they have to offer:

Jajah

Jajah began life last summer as an Austria-based PC-to-phone service. It was cheap, and the users who migrated to the company liked that aspect, but it never achieved much popularity. It didn’t take long for the founders of Jajah to realize that there’s only so far a company can go in a price war before competition from better-funded rivals drives it out of business. With that in mind, they decided to redesign their service to make it simple for a user with any phone to make VoIP calls with little more than a link to a web page. Because the interface is web-based, anyone with a browser can access it.

The Jajah paradigm works like this: a user with an account logs into the Jajah web site and enters the phone number he or she would like to call, along with the number of the phone (landline or mobile) they’d like to use. Jajah then dials the user’s phone. Once that call is connected, Jajah dials the remote phone. When the remote phone is answered, Jajah bridges the two calls using VoIP.
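
Jajah hasn’t published how its backend works, but the two-leg callback flow is easy to sketch. The telephony primitives below are hypothetical stand-ins, not a real API:

```python
# Hypothetical telephony primitives; Jajah's real internals are not public.
def dial(number):
    """Place a PSTN call leg; return a handle once the phone is answered."""
    print(f"dialing {number} over the local phone network...")
    return number  # stand-in for a call handle

def bridge(leg_a, leg_b):
    """Join two answered legs over the provider's VoIP backbone."""
    print(f"bridging {leg_a} <-> {leg_b} over IP")

def callback_call(caller_number, callee_number):
    caller_leg = dial(caller_number)   # 1. ring the user back on their own phone
    callee_leg = dial(callee_number)   # 2. only then ring the remote party
    bridge(caller_leg, callee_leg)     # 3. the long-haul middle runs over IP,
                                       #    which is where the savings come from

callback_call("+1 555 0100", "+43 1 555 0199")
```

Note that both endpoints stay ordinary phone calls, which is why no software or headset is needed on either end.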

As an introductory offer, Jajah is currently allowing US users to call a number almost anywhere in the world for up to five minutes for free with no registration. I gave it a try, and it seemed to work well.

Lycos Phone

Lycos is perhaps best known as the search engine that isn’t Google or Yahoo or AOL or MSN. It is also not Alta Vista, Excite, or any of another dozen also-rans. Lycos had its greatest success in the pre-Google age, and would like nothing more than to find something it does well enough to attract a decent quantity of users back to the service or, lacking that, uncover a way of packaging all of its portal offerings to turn it into a convenient one-stop for ‘Net surfers.

To that end, the company has unveiled Lycos Phone. Lycos Phone works only through a computer, and requires a client application and some type of speaker/microphone combination. Currently, only a Windows version of Lycos Phone is available, but support for other operating systems is planned for the future.

True to its portal roots, the Lycos Phone application ties in streaming video, MP3s, search features, ads (which can be viewed to earn free minutes), faxes, and even video calls. The service also provides the user with a phone number which can be used to receive calls from non-VoIP phones.

Bundling aside, Lycos Phone still falls short of a service like Skype in a few areas. For one thing, Lycos keeps you tethered to the PC, whereas Skype provides the fee-based option of forwarding your call to any non-VoIP phone. Skype also provides conference call capability, and it already offers Mac, Linux, and Pocket PC support.

Both of these services offer somewhat different methods of saving money by bypassing the regular phone companies’ connection charges in favor of VoIP. In so doing, they join a host of competitors like Vonage, Skype, and others offering similar services. Of the two, Lycos has a bit too much bundling going on for my taste. Jajah is cleaner and simpler, but both suffer from the fact that it’s hard to surpass the ease of use of a regular phone. The cost savings are nice, but even Jajah appears to be most useful as a long-distance alternative; you might as well make local calls the regular way. As always, your mileage may vary, and if you don’t like any of your VoIP options, wait five more minutes.
