
NaDa and EnThinnai

A couple of days back, the New York Times had a story on a recent research paper presented at the Usenix Workshop on Hot Topics in Cloud Computing. The idea is to spin up a cloud using servers placed inside homes and to use the heat generated by these servers to heat those homes. In the paper, the authors point to an earlier study that suggested using home routers as Nano Data Centers (NaDa) for content caching.

As stated in the NaDa paper: “The key idea behind NaDa is to create a distributed service platform based on tiny managed “servers” located at the edges of the network. In NaDa, both the nano servers and access bandwidth to those servers are controlled and managed by a single entity (typically an ISP).” It goes on to suggest that, “Significant opportunities already exist for hosting such tiny servers on ISP owned devices like Triple-Play gateways and DSL/cable modems that sit behind standard broadband accesses. Such gateways form the core of the NaDa platform and, in theory, can host many of the Internet services currently hosted in the data centers.” This has been exactly the guiding philosophy behind our development of EnThinnai, where the candidate service is Social Sharing, a consumer-friendly alternative to public social networks.

I think a Social Sharing service based on NaDa is a better alternative than the content caching and distribution service explored in the paper. Users may perceive that a Content Caching and Distribution service really benefits the ISP, and so may be reluctant to share their resources to offer service to others. Additionally, these gateways and modems require storage capability that may not be readily available. A Social Sharing service, on the other hand, directly benefits the hosting users, who will therefore be willing to supply storage devices to store their own content. More importantly, users will be assured that their content is at all times in their possession and that their privacy is protected. ISPs will be able to position this in a positive light compared to the privacy issues that plague public social networks.

In an article published in July 2010, Bruce Schneier categorizes social networking data into six groups based on (broadly speaking) who generated the data, whom it is about, and what kind of data it is. He further states that each category carries different editorial rights, and that we will have different access rights in each category. It is interesting to see how EnThinnai fares against this categorization. You can make your own comparison to Facebook and Google+, but my comparison says that with EnThinnai you are the master of your data.

Category        | Storage location | Access rights * | Editorial rights
----------------|------------------|-----------------|--------------------
Service data    | Own server       | Self            | Self alone
Disclosed data  | Own server       | Self            | Self alone
Entrusted data  | 3rd party server | 3rd party       | Self and 3rd party
Incidental data | 3rd party server | 3rd party       | 3rd party alone
Behavioral data | Not applicable   | N/A             | N/A
Derived data    | Not applicable   | N/A             | N/A

* Access rights can further be extended by this person.

As part of his “5 Myths of Social Software”, Jon Mell dispels the myth that one needs “lots of people for social tools to be a success.” He points to a well-known diagram by Chris Rasmussen and to his own positive experience at a three-person startup to conclude that “placing social tools in the context of their existing workflows (like email) and targeting identified business problems (even if they initially involve small groups) is far more successful than trying to get large numbers of young people using Facebook-like tools for the sake of it.”

This is a critical point, especially since the “network effect” is often erroneously invoked to suggest that a large social network is, ipso facto, critical for success. But at the same time, social tools should let innovators and early adopters evangelize to the rest of the organization. Many tools do not allow for this. Take the case of Google Wave. In my opinion it is a great piece of social software offering many features and capabilities. But my colleagues couldn’t be part of a single wave without committing to it fully. They could not wade into it – they had to fully submerge. It would have been nice if Wave had allowed me to invite a colleague into a wave to experience it. To illustrate this point further, consider the case where the colleague is an employee of a partner company. Shouldn’t she be able to use the social software as it pertains to the project at hand? Federation between companies is not the answer. What if that company has not deployed social software? What if they are using a different version?

So the bottom line is that social software must allow for “guests” before they become full-fledged users. Of course, for this to happen the software must allow browser-based access and accept third-party authentication tools like OpenID/OAuth.
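To make the guest idea concrete, here is a minimal sketch of browser-based guest verification using the python-openid library. This is an illustration only, not EnThinnai's implementation (EnThinnai is Java-based); the realm, return URL and session handling are placeholder assumptions.

```python
# Sketch: verifying a guest's OpenID without creating a local account.
# Uses the python-openid library; URLs and session handling are placeholders.
from openid.consumer import consumer
from openid.store.memstore import MemoryStore

store = MemoryStore()  # in-memory store; a real server would persist this

def begin_guest_login(session, guest_openid_url):
    """Start verification; returns the provider URL to redirect the guest to."""
    c = consumer.Consumer(session, store)
    auth_request = c.begin(guest_openid_url)
    return auth_request.redirectURL(realm='https://example.org/',
                                    return_to='https://example.org/openid/return')

def finish_guest_login(session, query_args, current_url):
    """Handle the provider's redirect back; True means the guest is verified."""
    response = consumer.Consumer(session, store).complete(query_args, current_url)
    return response.status == consumer.SUCCESS
```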

Yesterday I posted about a new class of devices that I would call “micro servers”. These are inexpensive Ubuntu boxes that consume very little electricity and can be used to run various always-on applications. Such a device is a perfect fit for EnThinnai. This post is a record of that experiment.

EnThinnai is a web application built around Apache, Tomcat, MySQL and Java. Ubuntu Software Center has MySQL and OpenJDK in its repository, so it was a simple matter to install them. A friend of mine gave me instructions to install Apache and Tomcat via the command line. Finally, installing EnThinnai amounts to copying the relevant WAR file to the appropriate location. That is it. EnThinnai has now been up and running continuously for a few days. The current version of EnThinnai is designed to run on a server on the public Internet, but I am running the Efika behind a NAT that gets a dynamic IP address. So I registered a dynamic domain name and set up some port-forwarding rules in my NAT. With these configurations, people can reach the Efika and access the information I am sharing with them. So far so good.

The development team is looking into ways to handle dynamic IP addresses and NAT issues as well. Once these are done, an average user will be able to set up an EnThinnai server and be ready to share information with their friends and family.

Significance of EnThinnai running on a home server

There have been many proposals for a federated system of social networks that share information among themselves. Last year Diaspora* got lots of public attention, and at the beginning of this year an alpha version was made available. Even though the main thrust of Diaspora* is federated social networking, running on servers at home is not a major focus. Recently another effort, the FreedomBox Foundation, declared that its focus is personal ownership of data and privacy protection, and stated its intention to run its application on plug computers at users’ homes.

EnThinnai running on the Efika meets the service objectives of both of these efforts. Indeed, it does more in one major respect: there is no requirement on you for me to share my information with you. Most social networks, and Diaspora*, require you to have credentials issued by them to access the information I want to share with you. Additionally, they require some form of bilateral relationship between you and me. Contrary to this, EnThinnai uses OpenID to authenticate you, and there is no requirement that you share any information with me. In fact, you may not be running EnThinnai at all. Some have equated this asymmetric relationship to Twitter’s, but I feel they are different. In Twitter, you and I share information with the whole world, and you opt to be notified of my sharing regardless of my wishes. EnThinnai allows me to share information with you (maybe even just you), even though you may not share any information with me.

VoIP with no service provider

One of the features of EnThinnai is to allow my visitors to initiate text and voice chat sessions with me. To do this, they contact my EnThinnai server via a browser to dynamically download a plugin, and the two of us can have an IM session or a VoIP call (wideband audio, for those who are paying attention to such things). In this respect I am my own VoIP provider. If you want to try it out, please let me know. I will make myself available and we can have a chat session.

Uphill battle

In many ways, solving the technical difficulties and designing the service architecture has been the easy part. But I have consistently encountered rejection of the idea of running servers at home. The whole industry has tacitly assumed that centralized server farms run by a big central entity are the optimal choice. Only the other day, I read that Google killed the GDrive project because the files were already in the cloud. For all the criticism of the poor privacy handling of central entities like Facebook, very few people consider this alternative feasible. This from an industry that prides itself on designing a system conforming to the maxim “intelligence at the ends”.

Many have cautioned that people don’t want to deal with the complexity of running a server on their own. My counterpoint is that the same users own and operate more complicated machinery like automobiles and other home appliances. So I hold out the hope that if we can design systems and software that hide the complexity, people will prefer to own and operate their own data-sharing systems.

Smartphones and tablets have thus far dominated the discussion of post-PC devices. These devices are expensive, mobility-focused and mainly help users consume information. But I would like to consider another set of devices: ones that are inexpensive to own and operate, stationary, and distribute information.

There are already examples of such devices, though they are not inexpensive. TiVo and Slingbox are two well-known examples. Both serve video content using proprietary hardware. They were expensive to develop, in R&D as well as in marketing. Since they defined new product categories with high consumer cost, it took a long time for them to get market traction. TiVo turned out to be more successful than Slingbox, and the service concept got adopted in other boxes. In other words, introducing single-function boxes is expensive and risky. The story repeats with media streaming boxes, and Pogoplug is trying its hand with NAS. Though it may not appear so at first blush, there are other examples: home monitoring, cordless base stations, WiFi routers, print servers and VoIP ATAs/clients.

All these examples have something in common: they are essentially software applications that require always-on hardware that is inexpensive, low in power consumption and operationally silent. If such a hardware platform is available, and from multiple vendors at that, then the same sort of “App Store phenomenon” can happen in this segment as well. I am here to report that such a platform is here and available now.

For about two years, I have been following the developments related to “plug computers” put forward by Marvell. They have put out a reference design built on an ARM processor, and there have been reports that Marvell expects plugs to retail for as low as $49. Pogoplug is built around this design, and Chumby is a similar device. But they are all one of a kind, closed in both hardware and software. Yes, Chumby allows third parties to build Flash-based applications, but that is all. Plug computers are not widely available. Marvell identifies a couple of third-party OEMs who can build products based on its reference design, but there are no generally available products targeted at the consumer market. Last month, however, I came across a device called the Efika MX Smarttop, marketed by Genesi.

The platform is very similar to plug computers. It is a compact device measuring 160x115x20mm. It is built around a Freescale i.MX515 (ARM Cortex-A8, 800MHz), with 512 MB RAM and 8 GB SSD. It has 10/100 Mbps Ethernet, 802.11 b/g/n WiFi, an SDHC card reader and 2 USB 2.0 ports. A display unit can be connected via an HDMI port. It comes with a derivative of Ubuntu 10.10. In other words, for all purposes it is a PC with a full-fledged OS. The unit consumes about 5W in full operation. The device currently retails for $129 from their website, but I suspect there is room for the price to go much lower once volume picks up.

Another noteworthy thing is that Ubuntu 10.10 has something called Ubuntu Software Center. It is like the Apple App Store or Android Marketplace: it is easy to discover and install software. No need to fudge with sudos and apt-gets.

So what can one do with the Efika? The Software Center has MySQL available. Even though Apache is not listed, there is no reason why it cannot be made available. This means users can run their own web sites on the Efika. With open-source software and an attached USB drive, one can make the Efika into a NAS. I was able to install and run Twinkle, a VoIP client; so with an appropriate USB device providing an FXS port, one can make it into an ATA. I am sure one could make it a DECT base station or a WiFi router, but those require additional hardware. So the next generation of the Efika needs something like the “expansion slot” of the PC era, with which application-specific hardware or processing power can be added. For example, for a DECT base station the additional hardware would perform the required radio function and also provide an FXO port; to make it a WiFi router, it would need LAN ports. I envision that such application-specific hardware will be made available by the corresponding app developer.

All in all, it is like the PC market all over again, with one big difference: the OS is free and open source. There is no one entity in control, save for ARM. I really hope this particular segment of consumer electronics sees lots of action.

There is a report that the EU will be funding a research effort into cloud storage technologies to the tune of $21.4M. This project will be spearheaded by IBM’s research team in Haifa, and it will take three years for the project to complete.

The following paragraph in that story is my focus today: “The project will explore other advanced features for cloud storage, such as flexible but secure access control. For example, a company may want to distribute a video to participants of a conference, but they may not want to give access credentials to those people for its own network. The project will look into ways the video can be shared securely under those conditions while also being accessible by people through any device, Kolodner said.”

One of the upcoming features of EnThinnai is applicable to the described scenario. As noted in a previous post, Notes in EnThinnai will have three parameters that will be used to control access. The first is the standard “To” parameter identifying the specific people who are allowed to access the content; this parameter will contain a list of the individuals’ OpenIDs. The second parameter is a “responsibility tag”, which identifies the authority responsible for issuing the “responsibility”. The third parameter is an “interest tag”, with which individuals declare their interest in material associated with a keyword.

The idea is that when a company wants to distribute a video to the participants of a conference, it will create a “Note” and identify the conference organizer as the issuing authority and the name of the conference as the associated tag. When somebody tries to access this Note, the system will use the OpenID procedure to authenticate the visitor and then use OpenID Attribute Exchange to query the conference organizer to confirm the visitor’s participation in the conference. Once this is done, the system will allow access to the Note. The use of a user-centric ID like OpenID ensures that access is flexible, while using an issuing authority to control access makes it secure.
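A rough sketch of that access check follows, in Python. The Attribute Exchange query to the issuing authority is simulated here with an in-memory table, since the wire-level details would depend on the organizer's OpenID deployment; all names are illustrative.

```python
# Sketch of the Note access check described above. The issuing authority's
# response is simulated; a real system would query it via OpenID Attribute
# Exchange after authenticating the visitor with the standard OpenID flow.
AUTHORITY_ASSERTIONS = {  # authority -> tag -> OpenIDs it vouches for
    'https://organizer.example': {'cloudconf-2011': {'https://alice.example/id'}},
}

def authority_confirms(authority, tag, openid_url):
    """Stand-in for an Attribute Exchange query to the issuing authority."""
    return openid_url in AUTHORITY_ASSERTIONS.get(authority, {}).get(tag, set())

def may_access_note(note, visitor_openid, visitor_authenticated):
    """Grant access only to authenticated visitors the authority vouches for."""
    return visitor_authenticated and authority_confirms(
        note['authority'], note['tag'], visitor_openid)

note = {'authority': 'https://organizer.example', 'tag': 'cloudconf-2011'}
print(may_access_note(note, 'https://alice.example/id', True))    # True
print(may_access_note(note, 'https://mallory.example/id', True))  # False
```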

EnThinnai: A VRM Tool

Project VRM, a Berkman Center project, has been endeavoring to bring forth a set of “tools to make markets work for both vendors and customers in ways that don’t require the former to ‘lock in’ the latter”, an idea developed in The Cluetrain Manifesto. Doc Searls has been spearheading the project. In a recent blog post he observed that VRM is much more than the “reciprocal” of CRM: it is a set of “tools that give individuals independence from others, yet useful means for engaging with others – especially organizations, and among those especially sellers. But the core elements are individuals and independence.” To mark the upcoming first VRM+CRM Workshop, I thought I would elaborate on how EnThinnai can be used as a VRM tool.

As was seen in the previous post, EnThinnai allows an individual to share digital information with others. Access to such shared information can be controlled by three parameters: individuals’ OpenIDs, responsibility tags administered by one or more authorized entities, and interest tags. Additionally, EnThinnai provides real-time communication tools like text and voice chat. These tools also operate under a permission-controlled scheme: any permitted party can initiate a communication session with the user of EnThinnai.

Now let me take a specific use case and describe how a customer can use EnThinnai to request a product or service. To this end, the customer creates a “stream” as described in the previous post. The customer can identify individual known vendors in the “To” field. Alternatively, if the customer is soliciting proposals from a group of vendors not previously known, the customer can use the responsibility tag. Supposing the customer is interested in a plumbing job, she can use “Plumbers in 01234” as the responsibility tag, with the authorizing agency being Yellow Pages (or Yelp or Google Places or BBB). Once the customer creates such a stream, all the intended parties will be notified of it, and they can opt to get the full content from the customer’s server. Since EnThinnai allows authorized parties to post replies that become part of the stream, both the customer and the vendors can get the full history of the transaction at any time.

Now consider the case of a customer who would like to post a review of a restaurant. He could create a stream containing the review and identify “Italian restaurant in 01234” as the interest tag within the community of Google Places (or Yelp or Superpages). Here again subsequent visitors can use the reply mechanism to add to the original review.

In a much earlier post, Doc Searls enumerated ten principles behind VRM. It is worthwhile to calibrate EnThinnai against these principles and score how well it meets them. Each principle below is followed by how EnThinnai measures up.

  1. VRM provides tools for customers to manage relationships with vendors. These tools are personal. They can also be social, but they are personal first.

     A stream in EnThinnai need not be shared with anyone. It could be just a record for the benefit of the customer; in this respect it is a personal tool. Of course, it also allows the customer to share with one or more specific or loosely defined groups of vendors and other customers.

  2. VRM tools are customer tools. They are driven by the customer, and not under vendor control. Nor do they work only inside any one vendor’s exclusive relationship environment.

     EnThinnai is not controlled by a single vendor. Indeed, even the operator of EnThinnai is not in control. The customer is at liberty to specify any individual or authorizing agency.

  3. VRM tools relate. This means they engage vendors’ systems (e.g. CRM) in ways that work for both sides.

     Since streams are accessed using the standard HTTP protocol, any browser-based CRM can easily incorporate ways to access streams it gets notified about.

  4. VRM tools support transaction and conversation as well as relationship.

     As noted, EnThinnai allows permission-based real-time communications, enabling conversation.

  5. With VRM, customers are the central “points of integration” for their own data.

     Data is in only one place: at the customer’s server.

  6. With VRM, customers control their own data. They control the data they share, and the terms on which that data is shared.

     Nominally, data is stored only at the customer’s server. It is expected that others who are allowed to access the data will adhere to this principle; it would be a breach of trust otherwise.

  7. With VRM, customers can assert many things. Among these are requests for products or services, preferences, memberships, transaction histories and terms of service.

     This was described in the use case above.

  8. There is no limit on the variety of data and data types customers can hold — and choose to share with vendors and others on grounds that the customer controls.

     True.

  9. VRM turns the customer, and productive customer-vendor relationships, into platforms for many kinds of businesses.

     This needs operational evidence and so will take time.

  10. VRM is based on open standards, open APIs and open code. This will support a rising tide of activity that will lift an infinite variety of business boats, and other social goods.

     EnThinnai uses open standards. We have yet to define APIs, but when we do they will be open. We have not made the code open, and at this time there are no plans for opening it.

It is apparent that EnThinnai meets almost all of the principles set forth for VRM. The way EnThinnai is set up, no single entity can have dominant control. Since we use OpenID and third-party authorization, the artificial network effect is removed. The whole Internet can be part of every individual customer’s network.

Background:
During the recent Enterprise 2.0 Conference in Boston, there was a panel called Microsharing: It is All About the Tools. It is Not About the Tools, moderated by Marcia Conner. Stowe Boyd felt that the panel “demonstrated that there is widespread disagreement, confusion and even antipathy about streams in business.” So he wrote a blog post enumerating the characteristics of Streams, an abstracted service concept of Twitter, and also highlighted the differences between Streams and email. In this post I argue that business would benefit from the service concepts of both Streams and email, and I propose a service concept that integrates them.

Access Control: Publisher vs. Consumer:
The first defining characteristic that Boyd identifies is the “asymmetric relationship” widely attributed to Twitter, which he points out is derived from the public blogging model. Interestingly, he dismisses the character-count limit, another characteristic of Twitter, as not the most productive distinction. He makes it clear that the real focus should be on the way content is published and consumed. Content creators publish with no specific intended recipients; content consumers have their own ways to filter this vast collection of content, with no a priori agreement with the publishers. Certainly, publishers can facilitate consumers’ filtering with techniques like hashtags. But the critical thing is that the publishing and consumption processes are independent.

Boyd contrasts this to email where the publisher determines and selects the set of consumers. For him this is a critical flaw. If streams are elective on the consumers’ side, email is elective on the publishers’ side. If streams are inherently more distributed and bottom-up, email is inherently more centralized and top-down.

But I am uncomfortable with this categorical dichotomy. If Twitter is the prototype of Streams, it is instructive to note how it is actually used, especially because Twitter users are well known for developing ad hoc conventions to overcome its limitations. Even though Twitter streams are public and anybody can access them, users feel certain tweets are private and meant for a single individual; this need is met by the “direct message” (DM). In a business environment, the need for privacy is more acute: businesses have fiduciary and legal requirements to keep certain messages confidential, and only the publisher can know the level of restriction. Secondly, the general understanding in Twitter is that a person may miss a particular tweet; it is well known that tweets are phatic. To ensure that a particular person reads a tweet, the publisher usually uses an “@message”. Twitter’s web interface and almost all third-party clients list @messages (“mentions”). This is a call for a specific consumer to pay attention to a particular tweet, but decided by the publisher. Thirdly, as Boyd notes, publishers can use hashtags to telegraph the intended audience for a tweet. All of this points to the need for “elective on the publisher’s side” as well.

To summarize, Streams must allow for different levels of access restriction: all the way from free access, to free within a domain, to restricted to people with a certain responsibility, to restricted to a set of identified people.

Tummlers: Individuals and Tags too:
Kevin Marks talks about the role played by “tummlers” in expanding conversations. Since one routinely reads tweets from only a set of people, it is possible to be stranded on a Twitter island. So-called tummlers play the role of bridging these islands, usually by retweeting to spread information from one island to another. But ever-inventive Twitter users have found another way: hashtags. Publishers attach hashtags to their tweets, and others, even non-followers, can search for a specific hashtag term. These tags can be viewed as “interest tags” as expressed by consumers; in other words, a publisher is saying that a tweet will be of interest to those who are interested in the tags identified in the tweet. But the business context requires another kind of tag. In keeping with the requirement that businesses may have to control access, publishers may need to restrict access to only those whose area of responsibility includes what I call “responsibility tags”.

Accordingly, in the new service, access rights will be determined by three parameters: a “To” list as in traditional email, a list of responsibility tags along with the identification of the authority that issued each responsibility, and finally a list of interest tags. A publisher has to specify these three parameters and the specific logical combination that should be applied.

Let me elaborate with a few examples. If the publisher has put Aswath in the To list, acme.com/marketing in the responsibility tag and VoIP in the interest tag, and the logical combination is “AND”, then Aswath can access this post only if acme.com asserts that Aswath has marketing responsibility AND Aswath has expressed an interest in VoIP. On the other hand, if the logical combination is “OR”, then Aswath, or anybody who has marketing responsibility according to acme.com, or anybody who is interested in VoIP, can access this post. Of course, the logical combination can be a bit more involved.
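In code, the rule reduces to a few lines. The sketch below assumes the tag assertions (who holds which responsibility, who declared which interest) have already been verified with the issuing authority; the field names are mine, not a published schema.

```python
# Sketch of the three-parameter access rule with an AND/OR combination.
# Assumes responsibility and interest assertions were already verified.
def can_access(visitor, post):
    checks = [
        visitor['openid'] in post['to'],                            # named in "To"
        post['responsibility_tag'] in visitor['responsibilities'],  # e.g. acme.com/marketing
        post['interest_tag'] in visitor['interests'],               # e.g. VoIP
    ]
    return all(checks) if post['combination'] == 'AND' else any(checks)

post = {'to': ['https://aswath.example'],
        'responsibility_tag': 'acme.com/marketing',
        'interest_tag': 'VoIP', 'combination': 'OR'}
visitor = {'openid': 'https://bob.example',
           'responsibilities': set(), 'interests': {'VoIP'}}
print(can_access(visitor, post))  # True: Bob declared an interest in VoIP
```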

Anatomy of a Stream:
As was pointed out earlier, Boyd states that Twitter’s size restriction may not be relevant for businesses. Dave Winer has long been lobbying for Twitter to allow metadata; he points out that shortened URLs and pictures or other media attached via URLs are examples of metadata. Twitter itself has announced plans to introduce a new feature called Annotated Tweets. Notwithstanding all that, there is a real benefit in capturing the main idea of a post in a pithy comment: it allows the reader to quickly scan many messages before deciding to select a subset of them to dig deeper into.

So Streams should adopt the “Subject” field used in email, but restrict its length to 140 characters. Furthermore, recipients will first see only the Subject and possibly an initial segment of the post, but no more than 140 characters. A recipient can access the full post if so desired.

Distribution of Streams:
Email and Streams differ in how they distribute messages. In email, the sender explicitly identifies the list of recipients, and the sender’s server distributes the message to each of the recipients’ servers individually. On the other hand, distributed systems like XMPP use a pubsub-like mechanism; more recently, this mechanism has been further refined with PubSubHubbub, in which the originating server uses intermediary hubs to reach the ultimate servers. In a business environment, either scheme has some undesirable qualities. Since the email system delivers the complete message to the recipients, any one of them can forward it further down the line; the originator has no control over, or record of, such further distribution. In the case of PubSubHubbub, the intermediary nodes have access to the message. Even if the message is encrypted, the mere fact that two enterprises or individuals are communicating may itself be sensitive information. So an alternative, efficient mechanism must be used that takes privacy concerns into consideration.

When Streams identifies an individual recipient, it should first determine whether that recipient is a Streams user. If so, the Subject of the post, along with the URL to retrieve the complete post, will be posted to the recipient’s server using webhooks. If the recipient is not a Streams user, then the creator will be notified to inform the recipient through some other method, such as an independent mode of communication. The address resolution algorithm may resolve to a group of people identified by a responsibility tag under a domain. In this case, the domain may not reveal the individuals associated with the responsibility tag, since that could be sensitive information; so the Subject of the post and the retrieval URL will be deposited with the domain, which in turn will distribute them to the relevant individuals. Finally, if a group is identified by an interest tag under a domain, the group may be large, so in this case too the Subject of the post and the retrieval URL will be deposited with the domain, which will distribute them to the individuals.
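Sketching the delivery step makes the privacy property visible: only the Subject and a retrieval URL ever leave the originating server, and tag-addressed recipients are resolved by their own domain. The endpoint shapes and payload fields below are assumptions, not a published protocol.

```python
# Sketch of Streams delivery via webhooks: only the Subject and a retrieval
# URL are sent; the full post stays on the originator's server. Endpoint
# paths and payload fields are illustrative assumptions.
import requests

def notify(webhook_url, subject, retrieve_url):
    requests.post(webhook_url,
                  json={'subject': subject[:140], 'retrieve_url': retrieve_url},
                  timeout=10)

def distribute(post, recipients):
    for r in recipients:
        if r['kind'] == 'individual' and r.get('webhook'):
            notify(r['webhook'], post['subject'], post['url'])
        elif r['kind'] in ('responsibility', 'interest'):
            # The domain fans out the notification; the individuals behind
            # the tag are never revealed to the sender.
            notify(r['domain_webhook'], post['subject'], post['url'])
        else:
            print(f"inform {r['id']} out of band")  # not a Streams user
```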

Threaded Stream:
Traditionally, email systems treated messages individually. Then Google introduced the concept of threaded messages in GMail. Still, threading is from the perspective of the recipients: if one person is excluded from a reply, that person loses the threaded view. We should also note that an email thread captures organizational memory, which would be of help to a new person joining the group. But current email systems are not very effective at facilitating the transfer of this knowledge base. Streams must endeavor to provide this.

Accordingly, Streams should keep responses to a message along with the original message, identifying the author of each of the responses. Further, the original creator of a message must be able to add new recipients at a later time.

Summary:
1. A publisher can specify the audience for a stream using three parameters: individuals identified by the “To” field, Responsibility Tags and Interest Tags.
2. A stream will contain a Subject field that summarizes the content of the stream and is of limited length.
3. Stream will also contain a field called Body. It can contain arbitrary digital content and can be of arbitrary size.
4. Recipients can be individuals or a group whose members can only be determined by a third-party domain.
5. Individuals and third-party domains will be given the contents of the Subject field and a URL to retrieve the stream. When somebody tries to access the URL, the user will be authenticated to maintain the integrity of the access control stipulated by the publisher.
6. Any follow-up exchange to a stream will be appended to the Body of the stream.
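To make the summary concrete, here is one possible data model for a stream; the field names and the append-only reply method are my own illustration, not a published EnThinnai schema.

```python
# One possible data model for a stream, per the summary above. Field names
# are illustrative, not a published schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Stream:
    subject: str                                   # pithy summary, <= 140 chars
    body: List[str] = field(default_factory=list)  # arbitrary content + replies
    to: List[str] = field(default_factory=list)    # OpenIDs of named recipients
    responsibility_tags: List[str] = field(default_factory=list)  # tag @ authority
    interest_tags: List[str] = field(default_factory=list)

    def __post_init__(self):
        if len(self.subject) > 140:
            raise ValueError('Subject is limited to 140 characters')

    def reply(self, author_openid, text):
        """Follow-ups are appended to the Body, preserving the whole thread."""
        self.body.append(f'{author_openid}: {text}')
```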

Shameless Self-promotion: These and other thoughts were the motivational forces for EnThinnai. A showcase implementation has captured all of the requirements except for Responsibility and Interest Tags.

To date, almost all Presence-serving systems push a user’s Presence status to others. This is widely considered to be more efficient than individuals periodically polling the Presence status of all of their friends. But this is based on an oversimplified analysis that does not take into consideration accepted social etiquette and potential security and privacy issues. It is better for buddies to pull the Presence information of a user directly from that user’s Presence server. To further enhance the user experience, the Presence server should allow buddies to subscribe to changes in a user’s Presence status, with the approval of that user.

Presence service is universally designed as a Push service. Typically, user clients report the user’s network connectivity and keyboard status to a central server, which in turn pushes it to all the buddies of that user. Some services further allow users to customize the status info, either globally or for a particular buddy. I contend that this is not the preferable method, as it is insecure and introduces anti-social behavior.

Consider the following scenario: Abel and Betty are buddies. This allows Betty to constantly monitor Abel’s Presence status, so much so that she can reconstruct Abel’s timeline. In real life, even if Abel and Betty are close friends, Betty’s behavior would be considered abnormal, as dramatized by Lucy and Holden.

Indeed, the situation is worse. The real comparison would be Betty observing Abel through a periscope without Abel knowing about it. That would be truly anti-social behavior. But that is exactly what the Push system allows.

This problem is further compounded when Presence information is shared between federated networks. How does one network ensure that the other network maintains the confidentiality of the shared information? Specifically, if Abel is sharing different status information with Betty and Charles, who belong to the same federated network, the expectation is that Charles will not be able to access the information shared with Betty. And what about all the other members of the federated network who are not Abel’s buddies? Andy Zmolek points out this scenario in one of his blog posts.

This can be ensured only after extensive testing, leading to a time-consuming routine before two networks can federate. But this runs counter to the objectives of Unified Communications and Collaboration (UCC), of which Presence is a component.

Given these issues, I wonder why the Push system is still being used. I have raised this point with a few people. The consistent response is that Push is considered an efficient way of distributing seldom-changing Presence info; otherwise, all the clients would be polling all of their buddies’ Presence info, overloading the servers. This is true only because they have fixed on a specific use case, where the user can ascertain the Presence info of all of their buddies at a single glance. For this small convenience, we are paying a huge price.

But there is an alternative that addresses the concerns described earlier with only a small change to the user interface: Betty queries Abel’s Presence server whenever she needs that information. Since Abel’s server will log all such requests, Betty will be discouraged from stalking Abel, except when she is desperately trying to contact him.

Federation is not a big problem anymore since the server belonging to the federating network is not involved in this transaction. Of course Betty and Charles can exchange and compare the information they received. But that happens in real life as well. We as social beings have developed social norms to handle such situations.

Finally, if the user clients provide a simple mechanism for a user to query the Presence information of a single buddy or a group of buddies, then this would be an acceptable compromise given the other benefits. But there is one technical issue: since Betty will be querying Abel’s server, it must be able to authenticate Betty. Here my suggestion is to use OpenID/OAuth. By the way, this is how EnThinnai serves the Presence information of its users.

Pulling a user’s Presence information can be further enhanced by allowing Betty to subscribe to Abel’s Presence information. For example, Betty may want to be informed when Abel’s Presence info changes, or when it contains a specific string, and the like. Of course, such a subscription needs to be approved by Abel before updated information is delivered to Betty. This mirrors what happens in real-life interactions.

In summary, we should not push a user’s Presence information, but instead buddies must be allowed to pull after they are properly authenticated. Servers should also accept subscription requests which will be responded to after the user has given permission. Finally, the server should log all requests and make it available to the user.
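A toy pull-based Presence server illustrates the summary. Authentication is reduced to a trusted header here, where a real server would run the OpenID/OAuth flow suggested above; the route and field names are assumptions, not EnThinnai's implementation.

```python
# Toy pull-based Presence server (Flask). A real deployment would replace
# the trusted header with a proper OpenID/OAuth authentication flow.
from datetime import datetime, timezone
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
PRESENCE = {'status': 'available'}   # the user's current Presence info
REQUEST_LOG = []                     # shown to the user, discouraging stalking

@app.route('/presence')
def presence():
    buddy = request.headers.get('X-Authenticated-OpenID')  # stand-in for OpenID auth
    if not buddy:
        abort(401)                   # unauthenticated buddies get nothing
    REQUEST_LOG.append((datetime.now(timezone.utc), buddy))  # every pull is logged
    return jsonify(PRESENCE)

if __name__ == '__main__':
    app.run(port=8080)
```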

Over a series of posts in his blog Confused of Calcutta, JP Rangaswami presents his thoughts on how corporate IT departments should draw inspiration from Facebook to develop and deploy the software infrastructure that the emerging workforce will demand. I call the collection of posts the “facebook Manifesto” (the case of the letters being used advisedly). The purpose of this post is to compare EnThinnai against this Manifesto. Admittedly, EnThinnai has some gaps to fill; in some cases we have taken some of the ideas a step further, and in a few cases there are fundamental breaches. This post catalogues them in an attempt to develop a road-map for our future development plans.

The set of JP’s posts relevant to this analysis is:

  1. Facebook and the enterprise: Part 1
  2. The Facebookisation of the enterprise
  3. More on the Facebookisation of the enterprise
  4. Walls and bridges: even more on Facebookisation

Even though you will enjoy and benefit from reading these original posts, let me capture the main points here for ease of reference.

  1. Tomorrow’s workforce is experiencing and learning social skills in Facebook, which seems to have a different collaboration philosophy from what is traditionally practiced in the corporate world. Just as corporations have supplied old-world social facilitators like watercoolers and canteens, modern corporations must supply social network platforms. In his opinion the platform must support publishing, search, fulfillment and conversation. He calls them the Four Pillars.
  2. An enterprise worker would prefer to see all these things in a quick review: news events, a unified inbox, appointments, communities, and venues for consulting and sharing views and opinions.
  3. The unified inbox is enabled with both a whitelist and a blacklist.
  4. Colleagues’ presence information
  5. Search and discoverability tools
  6. Easy to mash-up third party applications
  7. Ability to federate with customers, partners and supply chain
  8. Total flexibility in privacy and access control

Given these broad objectives, the Manifesto also identifies a specific set of features:

  1. A personal token that can be used for all the activities in the company
  2. A place to create personal profile that allows for discoverability
  3. Create and maintain a social graph
  4. PIM – Address book, calendar and to do list
  5. Real-time communication with the members of the social graph
  6. Publication platform
  7. News feed

EnThinnai uses a company-supplied OpenID for authentication, so it can be used for other activities both within the company and at external sites. Additionally, we can use the Attribute Exchange mechanism of OpenID to convey HR-supplied authorization information like job title, scope of control and the like.
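For illustration, attaching such attributes to an OpenID authentication might look as follows with the python-openid library's Attribute Exchange extension; the attribute type URIs are hypothetical, since a company would define its own schema.

```python
# Sketch: requesting HR-supplied attributes over OpenID Attribute Exchange
# (python-openid). The type URIs are hypothetical; a company defines its own.
from openid.extensions import ax

JOB_TITLE = 'https://hr.example.com/schema/job_title'         # hypothetical URI
SCOPE     = 'https://hr.example.com/schema/scope_of_control'  # hypothetical URI

def add_hr_attributes(auth_request):
    """Attach an AX fetch request to an in-progress OpenID authentication."""
    fetch = ax.FetchRequest()
    fetch.add(ax.AttrInfo(JOB_TITLE, required=True))
    fetch.add(ax.AttrInfo(SCOPE, required=False))
    auth_request.addExtension(fetch)
    return auth_request
```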

EnThinnai allows its users to create a rudimentary profile along with a set of contact information. There is no address book in the traditional sense: EnThinnai maintains a list of contacts, their OpenIDs and the names of their EnThinnai servers. When a user wants the contact information of a specific person, the system retrieves it in real time. The current version does not have a calendar, but one is in our road-map.

EnThinnai has its own version of a social graph, but it is very different from the normal one. Unlike in many other social graphs, in EnThinnai the concept of a buddy is unilateral: that B is in the social graph of A does not mean that A is in B’s. Indeed, B may not even know that she is in A’s social graph. B may not even be a member of EnThinnai. Of course, B is identified by her OpenID, so it is required that she have one. (Though the “follower” relationship in Twitter is also unilateral, there is a fundamental difference between the two.)

EnThinnai allows real-time communication, with availability status, text chat and voice communication between users of EnThinnai and the members of their social graphs. It is to be noted that this is done with no requirement for either party to pre-install a client. Oh, and we use a wideband codec for voice chat. The text chat is persistent, in the sense that the chat session can be continued at a later time and the whole session is saved.

The main objective of EnThinnai is to share digital information. Accordingly, users of EnThinnai can publish documents and share files with explicit access controls. Furthermore, people allowed to access the published information can post comments. EnThinnai is planning to integrate the recently open-sourced Etherpad so that it will be possible to edit a document in real time.

If two people are mutually in each other’s social graphs but are under different EnThinnai deployments, then update information is exchanged between the servers using webhook technology. This simple mechanism is used to federate multiple EnThinnai deployments.
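The receiving side of that exchange can be sketched in a few lines; the endpoint path, payload fields and follower lookup below are assumptions about how such a webhook might be wired, not EnThinnai's actual API.

```python
# Sketch of the receiving end of EnThinnai-to-EnThinnai federation: a peer
# server POSTs an update, and it is filed under every local user who has the
# sender in their (unilateral) social graph. All names are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)
SOCIAL_GRAPH = {'bob': {'https://alice.example/id'}}  # local user -> buddy OpenIDs
FEEDS = {}                                            # local user -> received updates

@app.route('/hooks/updates', methods=['POST'])
def receive_update():
    payload = request.get_json()
    for user, buddies in SOCIAL_GRAPH.items():
        if payload['from'] in buddies:                # this user follows the sender
            FEEDS.setdefault(user, []).append(payload['update'])
    return jsonify(ok=True)
```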

So, overall, we are very satisfied with how we meet the objectives of the Manifesto. Still, we have a lot more to do, and we are very encouraged.
