Drawing on the lore of technology punditry, the present evolution can be set in the perspective of three fundamental dimensions of information technology, namely processing, communication (including transmission and storage) and physical interaction (including interaction with users and with the environment).
Each of these dimensions can be seen as representative of one of three technology waves, which came more or less sequentially, with of course a large overlap. The first two waves have already produced their most significant revolutionary effects, and we are now entering the third wave, opened up by the availability of sensors and actuators in standard technologies. The effect of this third wave will be to enrich, quantitatively and qualitatively, interactions between the physical world and the information/communication sphere. It is first and foremost in this respect that smart devices have a truly revolutionary potential.
From this point of view, which is that of computing proper, smart devices are first characterised by being just that, smart, in a raw, zero-degree sense, not in any way implying autonomy or intelligence proper: they embed some (digital) processing power. As such they represent the end-point of a long-term evolution towards the decentralisation of computing capabilities.
An enlightening analogy has been drawn by Donald Norman: at the beginning of the twentieth century, the electric motor was a bulky and costly piece of hardware, and as such had to be used sparingly: one all-purpose domestic motor was available for the home and could be used to turn everything that required being turned. Centralising all computing/storage capabilities on the domestic PC, as the tendency may still exist today, would amount to much the same: considering processing power so scarce that it must be centralised in a single device. Of course, Moore's law has been disproving that assumption for a long time. The fetish device of the post-PC era is thus the information appliance, embodying separately some specialised processing/storage function taken over from the PC.
These information appliances need to communicate, if only to maintain the consistency of their respective information stores. As a network of distributed appliances, they may jointly recreate the overall functionality of the domestic PC. As such, they take for granted both the abundance of processing power, which is replicated, and the cheap abundance of transmission capacity, which is needed for this distributed storage and processing. They will also be used as terminals for information retrieval services and as such they are communication appliances as much as information appliances.
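The consistency-maintenance task mentioned above can be illustrated with a minimal sketch. The `Appliance` class, its peer-to-peer `sync_with` merge and the sample entries are all hypothetical, and last-writer-wins is just one simple reconciliation policy among many:

```python
import time

class Appliance:
    """A hypothetical information appliance holding a local copy of shared data."""
    def __init__(self, name):
        self.name = name
        self.store = {}  # key -> (timestamp, value)

    def put(self, key, value):
        self.store[key] = (time.time(), value)

    def sync_with(self, peer):
        """Last-writer-wins merge: both sides keep the newest value per key."""
        for key in set(self.store) | set(peer.store):
            mine = self.store.get(key, (0.0, None))
            theirs = peer.store.get(key, (0.0, None))
            newest = max(mine, theirs, key=lambda entry: entry[0])
            self.store[key] = peer.store[key] = newest

phone = Appliance("phone-book")
desk = Appliance("desk-terminal")
phone.put("alice", "555-0100")   # entered on one appliance
desk.put("bob", "555-0199")      # entered on another
phone.sync_with(desk)
# after syncing, each appliance holds both entries
```

Jointly, a network of such appliances recreates the single store of the domestic PC while no one device is indispensable.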
Yet, from a broader telecom-oriented viewpoint, the grand idea of "connecting everything to everything else" may reach much further, towards the networking of various physical devices (e.g. home appliances, industrial apparatuses), which are primarily neither information-processing nor communication devices, yet are already equipped with embedded processing/storage capabilities, and may benefit from a range of entirely new, as yet unexploited, network-based services.
This corresponds to the transformation of these (so far stand-alone) smart devices into networked devices, leveraging their direct remote interaction with users, software agents or peer devices to provide new capabilities beyond the reach of stand-alone devices. As such, these devices need not be endowed with a physical user interface proper. User interaction with these devices can take place entirely via the network (possibly by way of other mobile, smart user-side devices belonging to the previous two categories).
Networked devices, when understood in this way, open up a brand new service domain for telecommunications. This can be envisioned to be the fastest-growing domain of telecom services, as the total count of embedded microprocessors already surpasses, by a wide margin, the human population, and is growing faster. The growth potential of traditional human-centric telecom services is ultimately limited by the capacity of human users to communicate at either end of the communication chain, whereas these new services take human users "out of the loop": a human need not be at either end of the communication link when the traffic is essentially "device to device" or "device to server".
The new category of networked devices does incorporate processing, storage and transmission capabilities, yet its defining characteristic is the integration of physical interaction capabilities of extremely varied kinds, which account precisely for these devices' specific function in the physical environment. These capabilities can be as specific as those of dedicated hardware, such as industrial machinery or domestic appliances, or as generic as those of a location-sensing device, for example. Physical transduction is the common relevant abstraction of these capabilities: as sensing devices, they input physical data in whatever modality as numerical values, and as actuating devices they output numerical values as physical effects of whatever kind. These sensing-actuating capabilities, made possible by cheap integration with standard silicon-based technologies (e.g. MEMS), are the "defining abundance" of the interaction era, and they will make physical interaction as widespread as processing and transmission have already become.
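The transduction abstraction described above can be sketched as a pair of interfaces, with a trivial control loop coupling the two directions. The class names, the fixed sensor reading and the gain value are all illustrative assumptions, not any standard API:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Input side of physical transduction: a physical quantity read as a number."""
    @abstractmethod
    def read(self) -> float: ...

class Actuator(ABC):
    """Output side of physical transduction: a number turned into a physical effect."""
    @abstractmethod
    def write(self, value: float) -> None: ...

class Thermometer(Sensor):
    def read(self) -> float:
        return 21.5  # stand-in for a MEMS temperature reading, in degrees Celsius

class HeaterValve(Actuator):
    def __init__(self):
        self.setting = 0.0
    def write(self, value: float) -> None:
        self.setting = max(0.0, min(1.0, value))  # clamp to the valve's 0..1 range

def regulate(sensor: Sensor, actuator: Actuator, target: float, gain: float = 0.1):
    """A trivial proportional step: actuate in proportion to the sensed error."""
    error = target - sensor.read()
    actuator.write(gain * error)

valve = HeaterValve()
regulate(Thermometer(), valve, target=22.0)
# error = 22.0 - 21.5 = 0.5, so the valve setting becomes 0.1 * 0.5 = 0.05
```

Whatever the physical modality, the device presents the same numerical face to the network, which is what makes these capabilities composable into services.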
A special yet fundamental case of physical interaction is the interaction of devices with users. All three preceding evolutions can be re-envisioned from this point of view.
As for information appliances, the very idea of using specialised devices rather than general-purpose ones is often justified from the viewpoint of a better and more intuitive human interface made possible by streamlining the functionalities of the device: a single-purpose device is also (or should be, at least) a simple-purpose device, and should as such be more easily mastered by the user. The material embodiment of a single function is also more appealing than its purely abstract existence as a single menu item. In this view, the WIMP/desktop metaphor is seen as the hopelessly constricting projection of a rich and multidimensional information space into the two-dimensional simulation of an environment thought of as giving access mostly to dumb files, spreadsheets and word processors. Getting the interfaces back into the real world could mean, for example, replacing those widgets which have become so commonplace in 2D graphical user interfaces (buttons, icons) by physical items acting as elementary pieces of tangible interfaces.
As said before, the new category of networked devices may become human-interface-less, being operated entirely remotely through other devices and exchanging information exclusively with other devices and servers, rather than directly with the user himself. This does not contradict the previous statement, and does not imply a re-centralisation of human interfaces.
Letting devices communicate among themselves means mostly one thing: the user is relieved of the burden of having to interact with all those devices that will surround him, as devices are left to "do their own thing" together, informing the user only when a decision is required on his/her part. Actually, much of the interaction that would previously have occurred with these devices would have amounted to requiring the user to read information from one device and re-input it into another. Networked devices make this unnecessary.
In a world of overabundant information, the only remaining scarcity is the time and attention of the user. For this reason, devices should become proactive and take actions on their own, finding the relevant information for this in their environment (including other devices).
Where classical AI has more or less failed to equip applications with some software equivalent of "common sense", retrieving and taking into account a set of very concrete information nuggets from the physical context of the application can provide the closest equivalent to an elusive lore of human intuition.
Among such elementary physical information, location is the most obvious and the most universally exploitable. It can contribute to making interaction implicit: if the user gets close enough to an appliance or vending machine, for example, this will usually mean that he/she wants to use it, and a corresponding action may be attempted, for example providing him/her with a control interface for this device.
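The proximity rule just described can be sketched in a few lines. The device registry, coordinates and threshold below are invented for illustration; a real system would obtain positions from an indoor-location service and would weigh further context before offering an interface:

```python
import math

# Hypothetical registry of networked devices with known positions (x, y, in metres).
DEVICES = {
    "vending-machine-3": (2.0, 1.0),
    "printer-lobby": (40.0, 12.0),
}

PROXIMITY_THRESHOLD = 3.0  # metres within which intent to interact is assumed

def nearby_devices(user_pos, devices=DEVICES, threshold=PROXIMITY_THRESHOLD):
    """Return devices close enough that a control interface may be offered implicitly."""
    return [
        name for name, (x, y) in devices.items()
        if math.dist(user_pos, (x, y)) <= threshold
    ]

# A user standing at (1.0, 1.0) is about one metre from the vending machine,
# so only that device's control interface would be offered.
```

No explicit command is ever issued: the user's movement through physical space is itself the input.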
Physical location becoming integral to any interface means much more: the world becomes the interface. The grand idea behind using physical location to address information thus goes much beyond the cellular-network-location-based services already on offer, and even beyond the more general idea of context-awareness. In this view, the physical world is the most compelling interface metaphor to cyberspace, and geo-location is used, not only as the user's own position, but as a unifying navigation anchor and an intuitive representational tool, to make sense of the overwhelming multidimensionality of the information space.
Taking into account this evolution, embedded processing should be construed in the narrower and more precise sense of "embedded in a non-IT device", instead of "embedded in a non-general-purpose-processing device".