
How do I get an OpenID?

It’s easy to get an OpenID; in fact, you probably already have one. If you have a Google account, you can use your profile id number as your login (it can be found in your profiles.google.com URL). Similarly, if you have a Yahoo account, you can use your username as your OpenID login.

Other sources for OpenIDs include third-party providers like Verisign Labs. If you use WordPress to host a blog, you can also install a plug-in to act as your own OpenID provider.

If you have an account with one of the above providers, then you can derive your OpenID using the following rules:


  • Google: profiles.google.com/<your profile id number>

  • Yahoo: me.yahoo.com/<your username> or a customized string

  • WordPress: the URL of the home page of your blog at wordpress.com
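The derivation rules above can be restated as a small helper. The provider keys and URL patterns simply mirror the list; this is an illustrative sketch, not an official API of any provider.

```python
# A helper restating the OpenID derivation rules above.
# The patterns mirror the list in this post; nothing here is official.

OPENID_PATTERNS = {
    "google": "profiles.google.com/{}",   # your profile id number
    "yahoo": "me.yahoo.com/{}",           # your username, or a customized string
    "wordpress": "{}",                    # home-page URL of your wordpress.com blog
}

def openid_for(provider, ident):
    """Derive the OpenID URL for one of the known providers."""
    return OPENID_PATTERNS[provider].format(ident)
```

For example, `openid_for("google", "1234567890")` yields `profiles.google.com/1234567890`.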

Introducing ffonio.in

ffonio.in is a web application that people can use to have IM, voice and video chats with their friends and family. Users can run this app on their own devices, such as a WiFi router or a Raspberry Pi, or on a cloud instance like a Digital Ocean Droplet. As long as friends and family have an OpenID and use a browser that supports WebRTC, they do not have to host the application themselves.

The following are highlighted features of ffonio.in:

  1. Use of OpenID for authentication. (Registered users can assign an unverified, if insecure, “OpenID” to unregistered users in an ad-hoc fashion.)
  2. “Availability status”, in lieu of Presence. Users can present a different status to different persons.
  3. Only users who have been previously authorized can initiate an IM, voice or video chat. The authorization can be changed at any time.
  4. Seamlessly move to a voice or video chat from an IM chat session.
  5. Ability for either user to mute sound or turn off camera.
  6. Ability to buzz the other user to catch their attention.
  7. Once the IM chat session has ended, the transcript is made available to both the users. (We plan to also make recordings of voice and video chats available in the near future.)
  8. The app has a built-in simple relay server (two-sided NAT) to assist in NAT Traversal, replacing the functions of a TURN server.
  9. Generate a custom reach-URL (which users can share in their email or business card) or an embed code (which users can add to their websites.)

Although our primary objective was to help individuals run their own IM, voice and video chats, this system can also be used on a much larger scale, such as within an enterprise. Companies can use this product both for internal communication between employees and for external communication with outside partners and customers. We plan on pursuing this direction in the near future by integrating this application with CRM systems like Salesforce and SugarCRM.

As early as 2008, EnThinnai supported the ability to conduct IM and voice chats. At that time, a Java applet was dynamically downloaded to the browser. The browser then maintained a two-way signaling channel with the server that allowed asynchronous notification from the server to the browser – a proto-WebSocket, you may say. The applet also contained the Speex codec, which was used to provide real-time speech capability – fully anticipating WebRTC.

But we were in a bind to extend this feature further. There was no freely available video codec with which to support video communication. Leading mobile devices did not support Java. Users were disabling Java due to security concerns. For us, it is a defining use case that an unregistered user can initiate a communication session with a registered user (guest access). This means the capability afforded by the Java applet must be universally available, which is precisely the objective of WebRTC.

Now that WebRTC has reached a stable stage, we have replaced the Java applet with WebRTC, so users can use any WebRTC-enabled browser to communicate under EnThinnai.

Skype is celebrating its 10th anniversary. On this occasion, I thought it would be interesting to revisit my early comments. They were published as a guest blog post on Gigaom on March 27, 2004, but due to a server malfunction they were lost. I am republishing that post here. I am proud to say that many of my opinions have stood the test of time, including the claim that Skype would be forced to bring all the Supernodes in house.

It is very likely that you have heard about Skype; it is even probable that you are using Skype. (Fair disclosure: I am not a subscriber of Skype.) FCC Chairman Michael Powell suggests that the telephony marketplace has changed dramatically since the arrival of Skype. Is Skype really so special compared to other VoIP service providers? Of course Skype thinks so. They say that, unlike other VoIP service providers, Skype has a very intuitive user interface that does not require technical skills and is easy to configure. They also suggest that, unlike other VoIP service providers, they solve the NAT traversal problem without the use of proxies, with resultant better voice quality. Of course the clincher is that Skype is P2P and so is infinitely scalable and resilient.

Before I analyze these points, let me describe the workings of Skype based on my understanding and what is available in public.

  • There is a Global Index Server where all clients log in, authenticate themselves and exchange security key information.

  • Based on this exchange, the client will be assigned a Supernode, which will maintain its presence information; Supernodes also communicate with one another to locate other end-points.

  • The clients and Supernodes use the well documented UDP Hole Punching algorithm to solve the NAT Traversal problem.

Upon a little reflection, we can see that functionally this architecture is equivalent to other VoIP architectures like SIP. The Global Index Server is equivalent to the Registrar; the function described in the second item is equivalent to the Location Server; and the function described in the third item is that of a Session Border Controller. What is more, many SBC vendors solve NAT traversal problems using similar optimization techniques with the same rate of success. Consequently, clients in other environments also do not require complicated configuration.

Skype users have commented positively on its voice quality. Global IP Sound indicates that Skype uses its codecs, in particular iLBC. GIPS also supplies its codecs to other VoIP clients; X-ten, for one, uses the iLBC codec. So one can get Skype-like quality in other systems as well.

The Global Index Server is a single point of failure. If it fails, clients cannot log in; I suppose new Supernodes cannot be drafted either. In my opinion, this is not a serious failure, because the existing system can continue to function and a replacement GIS can easily be brought online.

But my concern regarding Supernodes is more substantial. It is suggested that, since the Supernodes are nothing more than other Skype clients, Skype is infinitely scalable. I submit that this may not be the case. To begin with, a client is eligible to be a Supernode only if it has enough processing power and bandwidth capacity to perform the functions of a Supernode. Additionally, it is a requirement that it be present on the public Internet or behind a “transparent” NAT and a “permissive” Firewall. I am betting that such clients will be scarce relative to the total number of clients (a single Supernode serves around 100 clients).

If Supernodes need to have special capabilities, then it is likely that they will demand some form of compensation. It is not clear whether Skype is set up for this. Additionally, it is not clear how individual clients are protected from a misbehaving Supernode. It is true that the media is encoded. But the Supernode is involved in the signaling phase. Since the Supernode has network connectivity to the client, it is tempting to use it for extra and unwanted commercial activity. So Skype may deploy their own Supernodes, eliminating one more difference between it and other VoIP providers.

Some have expressed reservations because Skype is proprietary. There have been previous instances where proprietary consumer items found wide adoption without incurring huge collective cost; the VCR is one example that comes to mind. But in this case there are some differences:

  • Alternatives, based on standards are available

  • Skype uses mostly well-known and open technologies; only the protocol semantics is proprietary

  • Even though Skype (for that matter, VoIP) is naturally a “product” and not a “service”, Skype views it as a service. For example, they do not allow an enterprise to use its own GIS instead of the global one, even if communication will be restricted to internal use alone.

  • As I am told, there is no way to directly address another client, even if the IP address is known. Windows Messenger from Microsoft has the same limitation, whereas NetMeeting allowed direct communication.

In this respect also, they are just like other VoIP providers. It is disheartening to see that even those whose middle name should be P2P think like this. I am reminded of an ad that appeared in a New York-based Indian newspaper in 1982. The ad was taken out by an Indian restaurant that offered two free alcoholic drinks in exchange for a ticket stub from the movie Gandhi.

In summary, Skype shares the same functional architecture with other VoIP providers. It shares the same business plan and outlook. But they have artificially cloaked it in a proprietary system. I guess this is their “economic moat”, to use a Buffett term. From a consumer point of view, the beauty of VoIP is that there is no moat and current technology is sufficient to realize direct IP communications that do not require any intermediation.

Aswath Rao has 20 years of experience in the telecommunications field, having worked for leading R&D firms. He has worked on ISDN, Frame Relay, BISDN, wireless and satellite communications. For the past 5 years he has been working on VoIP related issues. Long before intelligence at the end became acceptable, he advocated “functional terminals” in ISDN. His proposal for Inter Connect Function has been incorporated in the TIPHON architecture and currently it is known as Session Border Controller. He has developed ways to offer PSTN subscribers many of the features available to VoIP subscribers. He maintains a blog. He can be reached at aswath@whencevoip.com

It has been reported that Telefonica will shut down Tu Me and redirect its resources toward shoring up another service, Tu Go. People have theorized that, compared to competing services from OTTs, Tu Me has anemic traction with uncertain revenue potential. On the other hand, the reasoning continues, Tu Go has solid revenue opportunities, since it accrues billable minutes/SMS from the existing customer base. Two years back, I gave a talk at the Telecom 2018 Workshop, in which I argued that telcos will have a difficult time directly competing with OTTs and suggested an alternate approach. In this post, I revisit those points in the context of Telefonica’s decision.

We have to recognize that telcos and OTTs are fundamentally different. OTTs are funded by risk-loving VCs. They are designed to take big risks, with a quick entry and just as quick an exit. They go for world domination and design their services for viral adoption.

Telcos are a study in contrast. They are established enterprises beholden to shareholders who value steady returns and are averse to big risks. Furthermore, they need to worry about the cross-elasticity of new services with old ones. They also have a strong presence in geographically restricted areas and usually federate with other telcos out-of-region. But such federation is not easy to come by, since potential partners may have different priorities in introducing new and speculative services. So on Day 1, a new service will have a low network effect.

It is clear that Tu Me experienced exactly these issues and predictably had low traction. Though I do not have verified data, it is a safe bet that they were more successful in their home regions than out-of-region. Since they are marketing Tu Go to their existing customers, they will have better luck with that service. It allows their subscribers to access the services using multiple means of access. This way, they have become an “OTT” for their subscribers. But it is only half the solution.

If we take the perspective of friends of Telefonica’s subscribers, we will notice the missing piece. They also use multiple technologies to access the network, but in the current scheme it all has to come via the PSTN, with an attendant restrictive set of features due to federation agreements with their carriers. This need not be the case anymore. Suppose Telefonica allows non-subscribers to reach its network using WebRTC technology; then its customers can use new services and features with no loss of network effect.

This is the fundamental benefit of WebRTC from the perspective of the carriers: it frees them to introduce new services and features to their subscribers without loss of network effect and without relying on federating and coordinating with other carriers.

In a recent post, Chris Kranky wrote on the need “to move on” and the need for expediency in wrapping up the first iteration of the API. Personally, I would have benefited if the first iteration had been a low-level spec, for I could have easily ported a custom Java applet. But given the passage of time, it is more important that there is an agreed standard. But this point is not the objective of this post. Instead, I would like to focus on another of his points:

[WebRTC] wasn’t designed to be federated (namely, that 2 WebRTC applications aren’t in fact supposed to talk to each other).

He makes this observation to explain the motivation for seeking low level control. My quibble is not with this explanation, but I want to take this sentence in isolation, interpret it literally and discuss it. (It is not fair to Chris, but I am just using his sentence as a prop. So it should be OK with him.)

In my interpretation, if WebRTC is not designed to be federated, then there is some deficiency that needs to be addressed, if not immediately then at some future time. But with the WebRTC construct there is no need for federation. Let me explain.

The following are four main reasons why we need federation, and how WebRTC handles each without requiring it:

  1. Reachability information is not widely held, except by some selected nodes in both systems.
    • Since a WebRTC address is an HTTP URI, the originator’s service provider or system is not needed. The originator can directly access the destination’s system. Indeed, it is not required that the originator be part of any service provider or system.
  2. Communication between the systems may need mediation to handle incompatibilities.
    • Since the app server dynamically downloads the signaling procedures, there are no incompatibility issues on the signaling plane. I further assume that MTI codecs remove incompatibility between the browsers. In any event, any such incompatibility can be solved without the two systems federating.
  3. Identification and authentication of external nodes need to be mediated.
    • Since the whole construct is built on HTTP, any of the third-party verification systems can be used to identify and authenticate the end-points. In this respect there is a need for federation, but it is a much less stringent requirement and can easily be waived by the end points depending on the use case.
  4. Since the external systems may not be trustworthy, the local system needs to protect its users.
    • WebRTC has built-in security mechanisms to protect the end nodes from malware apps. Specifically, the browser ensures that a rogue app cannot assume control of the end node.

In my opinion, the fact that WebRTC does away with federation is one of its important benefits and is why it is going to disrupt the communications industry.

For many use cases, the point of this post may not be applicable. But for the use case where your contacts can use a browser to reach and communicate with you, it is very relevant. So you decide whether to read further.

If you want your contacts to be able to reach you from their WebRTC-enabled browser, you need to provide them with an HTTP URI. Just like we share our telephone number. Or email address. Or, better yet, the URL of our blogs. Usually the URI will identify the location where the WebRTC app is running and also your id. For example, the URI to reach me at Twelephone is


It can be as simple as this or a bit more complicated. I have a WebRTC app running on a self-hosted server. My URI there is


You may not want to share such an unwieldy URI. Since a WebRTC session starts with an HTTP message exchange, we can use a redirection service and a more memorable URI as a WebRTC “number”. For example, I am using


as my URI to reach my self-hosted WebRTC app server.

We may want to do this even if the URI is short, like the one issued by Twelephone. Unlike phone numbers, you cannot port the URI from one provider to another. But HTTP redirection is a simple and straightforward solution.
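At its core, such a redirection service is little more than a lookup table that answers a short, memorable path with a 302 pointing at the real app server. The sketch below illustrates this; all hostnames and paths are invented.

```python
# Minimal sketch of an HTTP redirection service acting as a portable
# WebRTC "number": a short path maps to the unwieldy URI of a
# self-hosted WebRTC app server. Hostnames and paths are invented.

SHORT_URIS = {
    "/aswath": "https://home.example.net:8443/webrtc/app?user=aswath",
}

def redirect(path):
    """Return (status, headers) for an incoming request path."""
    target = SHORT_URIS.get(path)
    if target is None:
        return 404, {}
    return 302, {"Location": target}
```

Because the mapping lives with you (say, on your own domain), you can later repoint the short URI at a different provider, giving the "number portability" the post alludes to.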

The URI need not be a static one. A website can dynamically construct a URI to indicate additional, context-specific information. For example, the URI can indicate the ID of the caller, which can be verified using an HTTP-based verification system. For instance, this URI,


indicates the caller’s id to be www.enthinnai.com/unauopenid/anycard. Or the caller can indicate the purpose of the call in the URI itself. Or the URI can indicate the page from which the call is initiated.

All this supplementary information is usually defined by the signaling protocol. Since WebRTC leaves the signaling protocol unspecified, the application can decide which information will be sent and in what format. Specifically, the app can decide to carry some of the information as part of the URI.

By now it is quite passé to claim that WebRTC will be a hugely disruptive technology. Indeed, there has been a predictable backlash. In all this back and forth, we often fail to note an important aspect of this technology: there has been a deliberate attempt to avoid specifying any messages and procedures that go across the wire to an intermediate point like a server. This gives enormous flexibility to app developers in designing a signaling procedure that suits the needs of the app, without having to worry about interoperability issues between arbitrary peers and the app. This is almost true, except for NAT/FW traversal. The objective of this post is to suggest a way to overcome this as well.

The recommended procedure for NAT/FW traversal is to use ICE, which in turn uses two servers: STUN and TURN. More importantly, ICE specifies the procedure and message format that these servers have to follow. Of course, Google makes available a STUN server, and free, open-source TURN server implementations exist. But if for some reason an app wants to avoid these external dependencies, then the app developer has to develop them and then test compatibility with browsers. This takes away one of the main benefits of WebRTC.

Instead of fully developing STUN and TURN servers, the idea is to develop a simple “Twice NAT” and make clever use of the Trickle ICE procedure that browsers already support to bootstrap the ICE procedure. Here, let us recall that a Twice NAT maps both the origination and destination addresses, instead of just the origination address on the outgoing flow (or, respectively, the destination address on the incoming flow).

As part of the Peer Connection procedure:

  1. Peer A generates an SDP offer containing its host address. The app server appends to this offer an address as a “fake” server reflexive address of Peer A before forwarding it to Peer B.
  2. In response, Peer B generates an SDP answer containing its host address. The app server appends to this answer another address as a “fake” server reflexive address of Peer B before forwarding it to Peer A.
  3. As part of the ICE connectivity check procedure, Peers A and B send connectivity check messages to these “fake” server reflexive addresses. From these, the app server can deduce the real server reflexive addresses of the peers.
  4. The app server also allocates two addresses at the Twice NAT as relay addresses.
  5. With these addresses at hand, the app server generates new SDP offers to the two peers containing the server reflexive and relay addresses of the other peer. The peers will respond with answers that the app server can ignore.
  6. Since the peers have new candidates, they will perform connectivity checks on this new set of addresses.

If a peer is multihomed, then that peer will conduct connectivity checks from each of its interfaces to the “fake” server reflexive address, yielding the true server reflexive address of each of those interfaces.

Thus, the app server facilitates NAT/FW traversal without developing conforming STUN and TURN servers, leaving the burden of compatibility solely with the peers.
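The Twice NAT's bookkeeping can be sketched as follows. Unlike an ordinary NAT, it rewrites both the source and the destination of each packet, splicing the two one-sided flows together; the (name, port) address tuples below are hypothetical.

```python
# Illustrative sketch of "Twice NAT" relay bookkeeping: both the source
# and destination addresses of a flow are rewritten, so one relay
# allocation per peer splices the two media flows together.
# Addresses are hypothetical (label, port) tuples.

class TwiceNat:
    def __init__(self):
        self.bindings = {}  # relay address -> (expected peer, partner relay)

    def splice(self, relay_a, peer_a, relay_b, peer_b):
        """Packets arriving at relay_a from peer_a leave from relay_b
        toward peer_b, and vice versa."""
        self.bindings[relay_a] = (peer_a, relay_b)
        self.bindings[relay_b] = (peer_b, relay_a)

    def forward(self, relay_in, src):
        """Rewrite a packet arriving at relay_in from src; returns the
        (new source, new destination) pair, or None to drop."""
        peer, partner = self.bindings[relay_in]
        if src != peer:
            return None                 # not the expected sender; drop
        out_peer, _ = self.bindings[partner]
        return (partner, out_peer)
```

The point of the sketch is that the relay needs no STUN/TURN message parsing at all; the app server learns the addresses through the trickle-ICE exchange described above and simply installs the splice.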

This procedure is adapted from one we were using in the Java-based RTC system in EnThinnai, developed during 2008.

Last week, during WebRTC Expo, we saw enormous activity around WebRTC. Even though people list many use cases, most of the demos and announcements were related to Unified Communications (UC). They were all good, but in a way disappointing, because they didn’t take full advantage of the rearchitecting afforded by WebRTC. The most widely held mindset seems to be to continue with the current architecture, making only peripheral changes to browser-based clients. I would like to take comments made by Andy Abramson and Vincent Perrin as a springboard to expound my view that we have to reorient our thinking.

In a blog post, Andy observes that WebRTC has the potential to “kill off” the softphone business if WebRTC apps/services fix a couple of things. He makes the following observation:

… for the most part softphones are not easy to work with, set up or manage unless you’re an IT guy. … More importantly, the services all need to do a better job with identity and management of multiple accounts. … like GrandCentral [WebRTC apps] need to give you one single sign on and manage many different identities, all in one place. That way, a customer with multiple accounts can manage their online communications life in one place. … [No one is doing] Single sign on, multiple accounts. Right now, it’s all one username, one account, one browser window. Sorry, but I for one don’t want multiple windows with multiple accounts on the same service to be running.

While commenting on an unrelated blog post, Vincent observes that

you won’t be able to be called except if you are in the right page at the right moment.

Both of these comments, to a certain extent, reveal a widely, if wrongly, held view regarding the roles of clients and apps. In this post I would like to describe how I view the architecture and why these issues do not arise under it.

  1. I do not consider that a web page served by a WebRTC app replaces the softclient. Instead, I consider that the full browser (not a single app), taken as a whole, replaces the softclient.
  2. The browser supports the Notification API, and WebRTC apps will use it to notify the user of events like an incoming session initiation request.
  3. WebRTC apps will use third-party authentication mechanisms to authenticate users. After all, the standard requires this mechanism when the end users want to authenticate each other over the Peer Connection.
  4. A user’s address book is nothing more than a bookmark folder in the browser containing the names of contacts and their corresponding WebRTC URIs.
  5. The workflow for initiating a session is for the originating user to visit the other user’s “WebRTC URI”, which will point to the app server among other things. At that time, the app server will authenticate the originator before proceeding further.

Given these, it is straightforward to see that none of the concerns expressed by Andy and Vincent are valid.

Since the apps use the Notification API to indicate an incoming session request, there is no need for the user to be on the “right page at the right moment”. It is enough that the apps are properly registered so they can notify the user’s browser. Given that the browser can be concurrently “logged into” multiple id providers, with the user selecting the relevant id as needed, and that the apps use third-party authentication mechanisms, we can easily meet Andy’s requirement that there be “single sign on, multiple accounts”.

Some of us strongly believe that WebRTC will usher in a wide variety of innovative services, features and capabilities. At the same time, there are many skeptics who dampen the (irrational?) exuberance. I am sure both sides will present their viewpoints during this week’s WebRTC Conference & Expo. In this post, I would like to mark that conference by outlining one possible application.

As background, we are all familiar with emergency telephone service. You know, the one where you dial 911? Or is it 112? Or is it 999? On top of that, you may have to dial a different number depending on the nature of the emergency – one number for Police, another for Medical and yet another for Fire. How is a roaming mobile user to know which number to use?

Then there are occasions when one would like to reach the local police for non-emergency assistance, like a fender-bender. But one may not know the contact information. Indeed, I do not know the phone number of my local police station, let alone that of a location I am just passing through.

Finally, many communities have non-emergency community information service, sometimes called 3-1-1 service, based on the dial code used in US. Other countries have similar services, but use different access numbers.

I propose an application that can be used in these scenarios.

A user who would like to contact the police or a government agency sends an HTTP request to the app provider. That request will contain the needed information, like the nature of the query, the location of the user (as derived from the device) and other incidental information. The app provider can then use these to locate the specific agency that has jurisdiction and redirect the HTTP request to that agency. From there, the agency and the user can communicate using WebRTC.

There are advantages to this scheme. A roaming user does not have to know how to reach the local agency. If the request is for a medical emergency, it can carry the location (a URL?) from which medical data can be retrieved. Of course, this requires authentication and authorization processes, which can easily be done using multiple redirections of HTTP requests.

It should be noted that the basic requirement is that the app provider have a universal database of emergency and other government agencies for any given location. This may not be such an onerous task. For example, SeeClickFix does it for 311 in many communities.
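The dispatch step described above can be sketched as a lookup followed by an HTTP redirect. The agency names, locations and URLs below are invented for illustration; a real provider would consult the universal database just mentioned.

```python
# Sketch of the proposed dispatch service: the HTTP request carries the
# nature of the query and the user's location; the provider looks up
# the agency with jurisdiction and answers with a redirect.
# All entries and URLs are invented.

AGENCIES = {
    ("police", "springfield"): "https://police.springfield.example.gov/webrtc",
    ("medical", "springfield"): "https://ems.springfield.example.gov/webrtc",
    ("311", "springfield"): "https://311.springfield.example.gov/webrtc",
}

def dispatch(nature, location):
    """Return (status, redirect target) for a request; 404 if no
    agency with jurisdiction is known."""
    target = AGENCIES.get((nature.lower(), location.lower()))
    return (307, target) if target else (404, None)
```

A 307 preserves the original request method and body, so incidental information (such as a pointer to medical data) carried in the request survives the redirection to the agency.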
