The peer-to-peer (P2P) model, sometimes called distributed or ad hoc networking, is an alternative to the client/server model. It spreads responsibilities across a collection of network nodes, possibly transient ones, with varying degrees of decentralization. In so-called "pure" P2P, every node is equal, acting as both a client and a server. In hybrid P2P, one or more central servers act as routers and dispatchers for the various peers, each of which has specialized responsibilities or possesses specific resources. These models are especially resilient because they shift the risk of failure away from any single node. Common applications of this networking approach include peer-to-peer file-sharing protocols such as BitTorrent and Gnutella; the Usenet Internet discussion system; grid computing, which spreads computational workloads across a collection of resources; and sensor and mesh networks, which propagate information from one node to another. There are many more.
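The hybrid model described above can be sketched as a minimal resource registry: a central server tracks which peers hold which resources, while the peers themselves would exchange data directly. This is an illustrative sketch only; the `Registry` class and its method names are invented for this example and come from no real library or protocol.

```python
# Hypothetical sketch of the hybrid P2P model: a central registry
# (the "server" role) maps resource names to the peers that hold
# them; peers then contact each other directly for the data itself.

class Registry:
    """Central dispatcher: maps resource names to the peers holding them."""

    def __init__(self):
        self._index = {}  # resource name -> set of peer ids

    def publish(self, peer_id, resource):
        """A peer announces that it possesses a resource."""
        self._index.setdefault(resource, set()).add(peer_id)

    def unpublish(self, peer_id):
        """A transient peer leaving the network is removed everywhere."""
        for holders in self._index.values():
            holders.discard(peer_id)

    def locate(self, resource):
        """Return the peers currently advertising a resource."""
        return sorted(self._index.get(resource, set()))


registry = Registry()
registry.publish("peer-a", "file.txt")
registry.publish("peer-b", "file.txt")
print(registry.locate("file.txt"))  # ['peer-a', 'peer-b']
registry.unpublish("peer-a")        # peer-a drops off the network
print(registry.locate("file.txt"))  # ['peer-b']
```

Note how the registry only brokers introductions; the single point of failure is the lookup service, not the data transfer, which is what makes the hybrid model more resilient than a plain client/server design.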
Although P2P approaches have many merits, they pose several challenges that require robust strategies to overcome. These include mechanisms for joining (or forming) the network; publishing and discovering resources (for example, bandwidth, files, services, and processor cycles); limiting, smoothing, or metering resource utilization; fault handling and recovery; security and attack mitigation; and a myriad of others. (A full discussion of each of these topics is clearly beyond the scope of the present text.) Several protocols attempt to address these challenges, to varying degrees, and may be useful conceptual starting points for your own projects. These include the UPnP architecture for device interoperability independent of network technology; Apple's Bonjour for local area network (LAN)-based service discovery; and the many approaches offered by the sensor network community, which has developed fairly sophisticated and robust mechanisms tailored specifically for resource-constrained network-embedded devices.
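Two of the challenges listed above, joining the network and discovering resources without a central server, can be sketched together in the spirit of Gnutella-style query flooding: a peer asks its neighbors, who forward the query until a hop limit is reached. All names here are illustrative assumptions; real protocols add message identifiers, duplicate suppression, timeouts, and much more.

```python
# Hypothetical sketch of resource discovery in a "pure" P2P network,
# loosely modeled on Gnutella-style query flooding. Each peer knows
# only its direct neighbors; queries spread hop by hop up to a TTL.

class Peer:
    def __init__(self, name, resources=()):
        self.name = name
        self.resources = set(resources)
        self.neighbors = []

    def connect(self, other):
        """Joining the network: links between peers are symmetric."""
        self.neighbors.append(other)
        other.neighbors.append(self)

    def query(self, resource, ttl=3, seen=None):
        """Return the names of peers holding `resource` within `ttl` hops."""
        seen = seen if seen is not None else set()
        if self.name in seen or ttl < 0:
            return set()          # already visited, or query expired
        seen.add(self.name)
        found = {self.name} if resource in self.resources else set()
        for neighbor in self.neighbors:
            found |= neighbor.query(resource, ttl - 1, seen)
        return found


a, b, c = Peer("a"), Peer("b", {"song.mp3"}), Peer("c", {"song.mp3"})
a.connect(b)
b.connect(c)
print(a.query("song.mp3"))         # {'b', 'c'} within the default TTL
print(a.query("song.mp3", ttl=1))  # {'b'}: peer c is two hops away
```

The TTL illustrates the resource-utilization trade-off mentioned above: a larger hop limit finds more peers but floods more of the network with each query.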