
It has been reported that Telefonica will shut down TU Me and redirect its resources to shoring up another service, TU Go. People have theorized that, compared to competing services from OTTs, TU Me had anemic traction with uncertain revenue potential. On the other hand, the reasoning continues, TU Go has solid revenue opportunities, since it accrues billable minutes/SMS from the existing customer base. Two years back, I gave a talk at the Telecom 2018 Workshop, in which I argued that telcos will have a difficult time directly competing with OTTs and suggested an alternate approach. In this post, I revisit those points in the context of Telefonica’s decision.

We have to recognize that telcos and OTTs are fundamentally different. OTTs are funded by risk-loving VCs. They are designed to take big risks, with a quick entry and just as quick an exit. They go for world domination and design their services for viral adoption.

Telcos are a study in contrast. They are established enterprises beholden to shareholders who value steady returns and are averse to big risks. Furthermore, they need to worry about the cross-elasticity of new services with old ones. They also have a strong presence in geographically restricted areas and usually federate with other telcos in out-of-region areas. But such federation is not easy to come by, since potential partners may have different priorities in introducing new and speculative services. So on day one, a new service will have a low network effect.

It is clear that TU Me experienced exactly these issues and predictably had low traction. Though I do not have verified data, it is a safe bet that it was more successful in Telefonica’s local regions than out-of-region. Since Telefonica is marketing TU Go to its existing customers, it will have better luck with that service. It allows subscribers to access the services using multiple means of access. This way, Telefonica has become an “OTT” for its own subscribers. But that is only half the solution.

If we take the perspective of the friends of Telefonica’s subscribers, we will notice the missing piece. They also use multiple technologies to access the network, but in the current scheme it all has to come via the PSTN, with an attendant restrictive set of features due to federation agreements with their carriers. This need not be the case anymore. Suppose Telefonica allows non-subscribers to reach its network using WebRTC technology; then its customers can use new services and features with no loss of network effect.

This is the fundamental benefit of WebRTC from the perspective of the carriers: it frees them to introduce new services and features to their subscribers without loss of network effect and without relying on federating and coordinating with other carriers.

In a recent post, Chris Kranky wrote on the need “to move on” and the need for expediency in wrapping up the first iteration of the API. Personally, I would have benefited if the first iteration had been a low-level spec, for I could have easily ported a custom Java applet. But given the passage of time, it is more important that there is an agreed standard. This point, however, is not the objective of this post. Instead I would like to focus on another of his points:

[WebRTC] wasn’t designed to be federated (namely that 2 WebRTC applications aren’t in fact supposed to talk to each other).

He makes this observation to explain the motivation for seeking low-level control. My quibble is not with this explanation, but I want to take this sentence in isolation, interpret it literally, and discuss it. (This is not quite fair to Chris, but I am just using his sentence as a prop, so it should be OK with him.)

In my interpretation, if WebRTC is not designed to be federated, then there is some deficiency that needs to be addressed, if not immediately then at some future time. But with the WebRTC construct there is no need for federation. Let me explain.

Following are the four main reasons we traditionally need federation, and how WebRTC handles each of them without requiring it:

  1. Reachability information is not widely held, except by some selected nodes in both systems.
    • Since a WebRTC address is an HTTP URI, the originator’s service provider or system is not needed. The originator can directly access the destination’s system. Indeed, it is not required that the originator be part of any service provider or system at all.
  2. Communication between the systems may need mediation to handle incompatibilities.
    • Since the app server dynamically downloads the signaling procedures, there are no incompatibility issues on the signaling plane. I further assume that MTI codecs remove incompatibility between the browsers. In any event, any such incompatibility can be solved without the two systems federating.
  3. Identification and authentication of external nodes need to be mediated.
    • Since the whole construct is built on HTTP, any of the third-party verification systems can be used to identify and authenticate the end-points. In this respect there is a need for federation, but it is a much less stringent requirement and can easily be waived by the end-points depending on the use case.
  4. Since the external systems may not be trustworthy, the local system needs to protect its users.
    • WebRTC has built-in security mechanisms to protect the end nodes from malware apps. Specifically, the browser ensures that a rogue app cannot assume control of the end node.

In my opinion, the fact that WebRTC does away with federation is one of its important benefits and a reason it is going to disrupt the communications industry.

For many use cases the point of this post may not be applicable. But for the use case where your contact can use a browser to reach you and communicate with you, it is very relevant. So you decide whether to read on.

If you want your contacts to be able to reach you from their WebRTC-enabled browser, you need to provide them with an HTTP URI, just as we share our telephone numbers, or email addresses, or better yet, the URLs of our blogs. Usually the URI will identify the location where the WebRTC app is running and also your id. For example, the URI to reach me at Twelephone is

http://twelephone.com/aswath

It can be as simple as this or a bit more complicated. I have a WebRTC app running on a self-hosted server. My URI there is

http://enthinnai.dyndns-home.com:8080/enthinnai/pages/iframelogin.jsp?from=iframe&owner=aswath.mocaedu.com

You may not want to share such an unwieldy URI. Since a WebRTC session starts with an HTTP message exchange, we can use a redirection service and hand out a more memorable URI as the WebRTC “number”. For example, I am using

http://bit.ly/callaswath

as my URI to reach my self-hosted WebRTC app server.

We may want to do this even if the URI is short, like the one issued by Twelephone. Unlike phone numbers, you cannot port the URI from one provider to another, but HTTP redirection is a simple and straightforward workaround.
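
As a minimal sketch of such a redirection service (TypeScript on Node; the path, port, and target URI are illustrative, not my actual setup), a memorable path can simply answer with a 302 pointing at the real app-server URI:

```typescript
// redirect.ts - sketch of a "WebRTC number" redirector (illustrative values only)
import { createServer } from "http";

// Memorable path -> real (unwieldy) WebRTC app URI.
const numbers: Record<string, string> = {
  "/callaswath":
    "http://enthinnai.dyndns-home.com:8080/enthinnai/pages/iframelogin.jsp?from=iframe&owner=aswath.mocaedu.com",
};

createServer((req, res) => {
  const target = numbers[req.url ?? ""];
  if (target) {
    // A 302 keeps the memorable URI portable: repoint it if the app server moves.
    res.writeHead(302, { Location: target });
  } else {
    res.writeHead(404);
  }
  res.end();
}).listen(8000);
```

Because the redirect target lives in a single table, moving the WebRTC app to a different provider only requires updating that entry, which is the porting point made above.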

The URI need not be static. A website can dynamically construct a URI to carry additional, context-specific information. For example, the URI can indicate the ID of the caller, which can be verified using an HTTP-based verification system. For example, this URI,

http://enthinnai.dyndns-home.com:8080/enthinnai/pages/iframelogin.jsp?from=iframe&owner=aswath.mocaedu.com&buddy=www.enthinnai.com/unauopenid/anycard

indicates the caller’s id to be www.enthinnai.com/unauopenid/anycard. Or the caller can indicate the purpose of the call in the URI itself. Or the URI can indicate the page from which the call is initiated.

All this supplementary information is usually defined by the signaling protocol. Since the signaling protocol is unspecified, the application can decide which information will be sent and in what format. Specifically, the app can decide to carry some of the information as part of the URI.
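
As an illustration of carrying such information in the URI, here is a hedged sketch: the owner and buddy parameters mirror the EnThinnai-style URI above, while purpose and referrer are hypothetical, app-defined additions.

```typescript
// buildCallUri.ts - sketch of dynamically constructing a context-specific WebRTC URI.
// The base URI and parameter names follow the example above; they are not a standard.
function buildCallUri(callerId: string, purpose?: string, fromPage?: string): string {
  const uri = new URL(
    "http://enthinnai.dyndns-home.com:8080/enthinnai/pages/iframelogin.jsp"
  );
  uri.searchParams.set("from", "iframe");
  uri.searchParams.set("owner", "aswath.mocaedu.com"); // callee's id
  uri.searchParams.set("buddy", callerId);             // caller's verifiable id
  if (purpose) uri.searchParams.set("purpose", purpose);    // free-form, app-defined
  if (fromPage) uri.searchParams.set("referrer", fromPage); // page the call starts from
  return uri.toString();
}

// Example: a caller identified by an OpenID-style URL, calling about a support issue.
console.log(buildCallUri("www.enthinnai.com/unauopenid/anycard", "support"));
```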

By now it is quite passé to claim that WebRTC will be a hugely disruptive technology. Indeed, there has been a predictable backlash. In all this back and forth, we very often fail to note an important aspect of this technology: there has been a deliberate attempt to avoid specifying any messages and procedures that go across the wire to an intermediate point like a server. This gives the app developer enormous flexibility in designing a signaling procedure that suits the needs of the app, without having to worry about interoperability issues between arbitrary peers and the app. This is almost true, except for NAT/FW traversal. The objective of this post is to suggest a way to overcome this as well.

The recommended procedure for NAT/FW traversal is to use ICE, which in turn uses two kinds of servers – STUN and TURN. More importantly, ICE specifies the procedures and message formats that these servers have to follow. Of course Google makes a STUN server available, and free, open-source TURN server implementations exist. But if for some reason an app wants to avoid these external dependencies, then the app developer has to build them and then test compatibility with browsers. This takes away one of the main benefits of WebRTC.

Instead of fully developing STUN and TURN servers, the idea is to develop a simple “Twice NAT” and make clever use of the Trickle ICE procedure that browsers already support to bootstrap the ICE procedure. Here, let us recall that a Twice NAT maps both the origination and destination addresses, instead of just the origination (resp. destination) address on the outgoing (resp. incoming) flow.

As part of the Peer Connection procedure, Peer A will generate an SDP offer containing its host address. The app server can append to this SDP offer an address as a “fake” server reflexive address of Peer A before forwarding it to Peer B. In response, Peer B will generate an SDP answer containing its host address. The app server can append to this SDP answer another address as a “fake” server reflexive address of Peer B before forwarding it to Peer A. As part of the ICE connectivity check procedure, Peers A and B will send connectivity check messages to these “fake” server reflexive addresses. From these, the app server can deduce the real server reflexive addresses of the peers. The app server can also allocate two addresses at the Twice NAT as relay addresses. With these addresses at hand, the app server can generate new SDP offers to the two peers containing the server reflexive and relay addresses of the other peer. Of course the peers will respond with answers that the app server can ignore. Since the peers have new candidates, they will perform connectivity checks on this new set of addresses.

If a peer is multihomed, then that peer will conduct connectivity checks from each of its interfaces to the “fake” server reflexive address, yielding the true server reflexive address of each of those interfaces.

Thus, the app server facilitates NAT/FW traversal without developing conforming STUN and TURN servers; compatibility rests solely with the peers’ own ICE implementations, which the browsers already provide.
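
A minimal sketch of the SDP manipulation step described above, assuming a TypeScript app server that relays offers and answers between the peers; the helper name, the candidate priority, and the raddr/rport placeholders are illustrative only:

```typescript
// appendFakeSrflx.ts - sketch: the app server injects a "fake" server reflexive
// candidate into an SDP before relaying it to the other peer.
function appendFakeSrflx(sdp: string, fakeIp: string, fakePort: number): string {
  // Minimal ICE candidate line: foundation, component, transport, priority,
  // address, port, type. The raddr/rport placeholders do not matter for this trick.
  const candidate =
    `a=candidate:1 1 udp 1694498815 ${fakeIp} ${fakePort} typ srflx raddr 0.0.0.0 rport 0`;
  // Naive: append after the last m-line; a fuller version would add it per media section.
  return sdp.trimEnd() + "\r\n" + candidate + "\r\n";
}

// Usage on the signaling path: augment Peer A's offer before forwarding it to Peer B.
// The connectivity checks Peer B then sends toward fakeIp:fakePort arrive at the
// Twice NAT, and their observed source address reveals Peer B's real server
// reflexive address to the app server.
```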

This procedure is adapted from a modified procedure we were using in the Java-based RTC system in EnThinnai, developed in 2008.

Last week during WebRTC Expo we saw enormous activity around WebRTC. Even though people list many use cases, most of the demos and announcements were related to Unified Communications (UC). They were all good, but in a way disappointing, because they didn’t take full advantage of the rearchitecture afforded by WebRTC. The most widely held mindset seems to be to continue with the current architecture, making only peripheral changes by using browser-based clients. I would like to take comments made by Andy Abramson and Vincent Perrin as a springboard to expound my view that we have to reorient our thinking.

In a blog post, Andy observes that WebRTC has the potential to “kill off” the softphone business, if WebRTC apps/services fix a couple of things. He makes the following observation:

… for the most part softphones are not easy to work with, set up or manage unless you’re an IT guy. … More importantly, the services all need to do a better job with identity and management of multiple accounts. … like GrandCentral [WebRTC apps] need to give you one single sign on and manage many different identities, all in one place. That way, a customer with multiple accounts can manage their online communications life in one place. … [No one is doing] Single sign on, multiple accounts. Right now, it’s all one username, one account, one browser window. Sorry, but I for one don’t want multiple windows with multiple accounts on the same service to be running.

While commenting on an unrelated blog post, Vincent observes that

you won’t be able to be called except if you are in the right page at the right moment.

Both of these comments, to a certain extent, reveal a widely, if wrongly, held view regarding the roles of clients and apps. In this post I would like to describe how I view the architecture and why these issues do not arise under it.

  1. I do not consider a web page served by a WebRTC app to be a replacement for the softclient. Instead, I consider the full browser (not a single app), taken as a whole, to replace the softclient.
  2. The browser supports the Notification API, and WebRTC apps will use it to notify the user of events like an incoming session initiation request (see the sketch after this list).
  3. WebRTC apps will use third-party authentication mechanisms to authenticate users. After all, the standard requires this mechanism when the end users want to authenticate each other over the Peer Connection.
  4. A user’s address book is nothing more than a bookmark folder in the browser containing the names of contacts and their corresponding WebRTC URIs.
  5. The workflow for initiating a session is for the originating user to visit the other user’s “WebRTC URI”, which points to the app server among other things. At that time, the app server will authenticate the originator before proceeding further.
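
As a minimal sketch of point 2, assuming the page keeps a signaling channel open (the WebSocket endpoint and message shape here are hypothetical) and notification permission is granted or can be requested:

```typescript
// notifyIncoming.ts - sketch: surfacing an incoming session request via the
// browser Notification API. The signaling endpoint and message format are assumed.
async function watchForCalls(): Promise<void> {
  if (Notification.permission !== "granted") {
    await Notification.requestPermission();
  }
  const signaling = new WebSocket("wss://example-webrtc-app.invalid/signaling");
  signaling.onmessage = (event) => {
    const msg = JSON.parse(event.data);
    if (msg.type === "incoming-session") {
      // The user need not be looking at this page; the browser raises the notification.
      new Notification("Incoming call", { body: `From ${msg.callerId}` });
    }
  };
}

watchForCalls();
```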

Given these, it is straightforward to see that none of the concerns expressed by Andy and Vincent are valid.

Since the apps use the Notification API to indicate an incoming session request, there is no need for the user to be on the “right page at the right moment”. It is enough that the apps are properly registered so they can notify the user’s browser. Given that the browser can be concurrently “logged into” multiple id providers, with the user selecting the relevant id as needed, and that the apps use third-party authentication mechanisms, we can easily meet Andy’s requirement of “single sign on, multiple accounts”.

Some of us strongly believe that WebRTC will usher in a wide variety of innovative services, features, and capabilities. At the same time, there are many skeptics dampening the (irrational?) exuberance. I am sure both sides will present their viewpoints during this week’s WebRTC Conference & Expo. In this post, I would like to mark that conference by outlining one possible application.

As background, we are all familiar with emergency telephone service. You know, the one where you dial 911? Or is it 112? Or is it 999? On top of that, you may have to dial a different number depending on the nature of the emergency – one number for Police, another for Medical, and yet another for Fire. How is a roaming mobile user to know which number to use?

Then there are occasions when one would like to reach the local police for non-emergency assistance, like a fender-bender, but may not know the contact information. Indeed, I do not know the phone number of my own local police station, let alone that of a location I am just passing through.

Finally, many communities have a non-emergency community information service, sometimes called 3-1-1 service after the dial code used in the US. Other countries have similar services but use different access numbers.

I propose an application that can be used in these scenarios.

A user who would like to contact the police or a government agency sends an HTTP request to the app provider. That request will contain the needed information, like the nature of the query, the location of the user (as derived from the device), and other incidental information. The app provider can then use these to locate the specific agency that has jurisdiction and redirect the HTTP request to that agency. From there, the agency and the user can communicate using the services of WebRTC.

There are advantages to this scheme. A roaming user does not have to know how to reach the local agency. If the request is for a medical emergency, the request can carry the location (a URL?) from which medical data can be retrieved. Of course this requires authentication and authorization, which can easily be handled with multiple redirections of HTTP requests.
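
A browser-side sketch of such a request; the app-provider endpoint, parameter names, and redirect behavior are assumptions for illustration:

```typescript
// reachAgency.ts - sketch: contact the jurisdictionally correct agency via an app provider.
// The endpoint and parameters are hypothetical.
function reachAgency(nature: "police" | "medical" | "fire" | "311"): void {
  navigator.geolocation.getCurrentPosition((pos) => {
    const uri = new URL("https://emergency-router.invalid/contact");
    uri.searchParams.set("nature", nature);
    uri.searchParams.set("lat", String(pos.coords.latitude));
    uri.searchParams.set("lon", String(pos.coords.longitude));
    // The app provider looks up the agency with jurisdiction and redirects (e.g. 302)
    // to that agency's WebRTC page, where the session proceeds.
    window.location.href = uri.toString();
  });
}

reachAgency("police");
```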

It should be noted that the basic requirement is that the app provider have a universal database of emergency and other government agencies for any given location. This may not be such an onerous task. For example, SeeClickFix does it for 311 in many communities.

NaDa and EnThinnai

A couple of days back, the New York Times had a story on a recent research paper presented at the Usenix Workshop on Hot Topics in Cloud Computing. The idea is to spin up a cloud using servers placed inside homes and use the heat generated by these servers to heat the homes. In the paper, the authors point to an earlier study that suggested using home routers as Nano Data Centers (NaDa) for content caching.

As stated in the NaDa paper: “The key idea behind NaDa is to create a distributed service platform based on tiny managed “servers” located at the edges of the network. In NaDa, both the nano servers and access bandwidth to those servers are controlled and managed by a single entity (typically an ISP).” It goes on to suggest that, “Significant opportunities already exist for hosting such tiny servers on ISP owned devices like Triple-Play gateways and DSL/cable modems that sit behind standard broadband accesses. Such gateways form the core of the NaDa platform and, in theory, can host many of the Internet services currently hosted in the data centers.” This has been the exact guiding philosophy behind EnThinnai, where the candidate service is Social Sharing, a consumer-friendly alternative to public social networks.

I think a Social Sharing service based on NaDa is a better alternative than the content caching and distribution service explored in the paper. Users may perceive that a content caching and distribution service really benefits the ISP, and so may be reluctant to share their resources to offer service to others. Additionally, these gateways and modems require storage capability that may not be readily available. A Social Sharing service, on the other hand, is directly beneficial to the hosting user, who will be willing to supply storage devices to store their own content. More importantly, users will be assured that their content is at all times in their possession and their privacy is protected. ISPs will be able to position this in a positive light compared to the privacy issues that plague public social networks.

In an article published in July 2010, Bruce Schneier categorizes social networking data into six groups based on (broadly speaking) who generated the data, about whom, and what the data is. He further states that each category will have different editorial rights and we will have different access rights in each category. It will be interesting to see how EnThinnai fares against this categorization. You can make your own comparison to Facebook and Google+. But my comparison says that with EnThinnai you are the master of your data.

| Category | Storage location | Access rights * | Editorial rights |
| --- | --- | --- | --- |
| Service data | Own server | Self | Self alone |
| Disclosed data | Own server | Self | Self alone |
| Entrusted data | 3rd party server | 3rd party | Self and 3rd party |
| Incidental data | 3rd party server | 3rd party | 3rd party alone |
| Behavioral data | Not applicable | N/A | N/A |
| Derived data | Not applicable | N/A | N/A |

* Access rights can further be extended by this person.

As part of his “5 Myths of Social Software”, Jon Mell dispels the myth that one needs “lots of people for social tools to be a success.” He points to the famous wiki-email diagram by Chris Rasmussen and to his own positive personal experience at a three-person startup to conclude that “placing social tools in the context of their existing workflows (like email) and targeting identified business problems (even if they initially involve small groups) is far more successful than trying to get large numbers of young people using Facebook-like tools for the sake of it.”

This is a very critical point, especially since the “network effect” is often erroneously invoked to suggest that a large social network is, ipso facto, critical for success. At the same time, social tools should make it easy for innovators and early adopters to evangelize to the rest of the organization. Many tools do not allow for this. Take the case of Google Wave. In my opinion it is a great piece of social software offering many features and capabilities. But my colleagues couldn’t be part of a single wave without committing to it fully. They cannot wade into it – they have to fully submerge. It would have been nice if Wave had allowed me to invite a colleague into a wave to experience it. To illustrate this point further, consider the case where the colleague is an employee of a partner company. Shouldn’t she be able to use the social software as it pertains to the project at hand? Federation between companies is not the answer. What if that company has not deployed social software? What if they are using a different version?

So the bottom line is that social software must allow for “guests” before they become full-fledged users. Of course, for this to happen, the software must allow browser-based access and support third-party authentication tools like OpenID/OAuth.

Yesterday I posted about a new class of devices that I would call “micro servers”. These are inexpensive Ubuntu boxes that consume very little electricity and can be used to run various always-on applications. Such a device is a perfect fit for EnThinnai. This post is a record of that experiment.

EnThinnai is a web application built around Apache, Tomcat, MySQL, and Java. Ubuntu Software Center has MySQL and OpenJDK in its repository, so it was a simple matter to install them. A friend of mine gave me instructions to install Apache and Tomcat via the command line. Finally, installing EnThinnai is just copying the relevant war file to the appropriate location. That is it. EnThinnai has now been up and running continuously for a few days. The current version of EnThinnai is designed to run on a server on the public Internet, but I am running the Efika behind a NAT that gets a dynamic IP address. So I have registered a dynamic domain name and had to set up some port-forwarding rules in my NAT. With these configurations, people can reach the Efika and access the information I am sharing with them. So far so good.

The development team is looking into ways to handle dynamic IP addresses and NAT issues as well. Once these are done, an average user will be able to set up an EnThinnai server and be ready to share information with friends and family.

Significance of EnThinnai running on a home server

There have been many proposals for a federated system of social networks that share information between them. Last year Diaspora* got lots of public attention, and at the beginning of this year an alpha version was made available. Even though the main thrust of Diaspora* is federated social networking, running on servers at home is not a major focus. Recently another effort, the FreedomBox Foundation, declared that its focus is personal ownership of data and privacy protection, and that it intends to run its application on plug computers at users’ homes.

EnThinnai running on the Efika meets the service objectives of both of these efforts. Indeed it does more in one major respect: there is no requirement on you for me to share my information with you. Most social networks and Diaspora* require you to have credentials issued by them to access the information I want to share with you. Additionally, they require some form of bilateral relationship between you and me. Contrary to this, EnThinnai uses OpenID to authenticate you, and there is no requirement that you share any information with me. In fact, you may not be running EnThinnai at all. Some have equated this asymmetric relationship to Twitter’s, but I feel they are different. In Twitter, you and I share information with the whole world, and you opt to be notified of my sharing independently of whether I opt to follow yours. EnThinnai allows me to share information with you (maybe even just you), even though you may not share any information with me.

VoIP with no service provider

One of the features of EnThinnai is to allow my visitors to initiate text and voice chat sessions with me. To do this, they contact my EnThinnai server via a browser and dynamically download a plugin, and the two of us can have an IM session or a VoIP call (wideband audio, for those who pay attention to such things). In this respect I am my own VoIP provider. If you want to try it out, please let me know. I will make myself available and we can have a chat session.

Uphill battle

In many ways, solving the technical difficulties and designing the service architecture has been easy. But I have consistently encountered rejection of the idea of running servers at home. The whole industry has tacitly assumed that centralized server farms run by a big central entity are the optimal arrangement. Only the other day, I read that Google killed the GDrive project because the files are already in the cloud. For all the criticism of the poor privacy practices of central entities like Facebook, very few people consider this alternative feasible. This from the industry that prides itself on designing a system conforming to the maxim “intelligence at the ends”.

Many have cautioned that people don’t want to deal with the complexity of running a server on their own. My counterpoint is that the same users own and operate more complicated machinery, like automobiles and other home appliances. So I hold out hope that if we design the system and software to hide the complexity, people will prefer to own and operate their own data-sharing systems.

