Challenges

The UniServer project tries to address the following main challenges.

Challenge-1: Variations and Pessimistic Margins

Aggressive technology scaling has worsened static and dynamic variations, which make circuits prone to failures by making it harder to meet their specifications. Conventional designs address these variations by adding safety margins in voltage and frequency to ensure correct operation. The added voltage safety margins increase energy consumption and force operation at a higher voltage or a lower frequency. They may also result in lower yield or field returns if a part operates at higher power than its specification allows. Voltage margins are becoming more prominent with area scaling and the use of more cores per chip, owing to large voltage droops, reliability issues at low voltages (Vmin), and core-to-core variations. The scale of this pessimism has been confirmed by recent measurements of ARM processors, which revealed timing and voltage margins of more than 30% at 28nm.
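
As a rough illustration of why such margins matter, the sketch below estimates the dynamic-power cost of a pessimistic voltage guardband using the textbook relation P_dyn ∝ C·V²·f. Only the ~30% margin figure comes from the measurements cited above; all other numbers (nominal voltage, normalised capacitance and frequency, and the interpretation of the margin) are assumptions for illustration.

```python
# Illustrative sketch only: dynamic-power cost of a pessimistic voltage margin,
# using the textbook relation P_dyn ~ C * V^2 * f.
# The ~30% margin figure is taken from the measurements cited in the text;
# everything else below is an assumption chosen purely for illustration.

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic switching power, proportional to C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

nominal_v = 1.0                            # assumed nominal supply voltage (volts)
margin = 0.30                              # ~30% voltage guardband, as reported at 28nm
required_v = nominal_v / (1.0 + margin)    # voltage that would actually suffice (assumed model)

c, f = 1.0, 1.0                            # normalised capacitance and frequency

p_nominal = dynamic_power(c, nominal_v, f)
p_trimmed = dynamic_power(c, required_v, f)

print(f"Dynamic power saved by removing the margin: "
      f"{100 * (1 - p_trimmed / p_nominal):.0f}%")   # ~41% in this toy example
```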

Challenge-2: Stagnant Power Scaling

For over four decades, Mooreʼs Law, coupled with Dennard scaling, ensured an exponential performance increase in every process generation through device, circuit, and architectural advances. Up to 2005, Dennard scaling meant increased transistor density at constant power density. Had Dennard scaling continued, according to Koomey, by the year 2020 we would have seen an approximately 40-fold increase in energy efficiency compared to 2013. Unfortunately, Dennard scaling has ended because voltage scaling has slowed down, as leakage current does not scale as fast as area. Together, leakage and variations have elevated power to a prime design parameter. If we want to go faster, we need to find ways to become more power efficient. Simply put, the more energy efficient a chip is, the more functionality it can host at higher utilisation and, naturally, the more tasks it can service.
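
To put the quoted figure in perspective, the back-of-the-envelope sketch below computes the efficiency-doubling period implied by a 40-fold improvement between 2013 and 2020. It is simple arithmetic on the numbers in the text, not a citation of Koomey's own analysis.

```python
import math

# Back-of-the-envelope check of the quoted figure: a ~40x energy-efficiency
# gain between 2013 and 2020 corresponds to the doubling period computed below.
# This is arithmetic on the numbers in the text, not a result from Koomey's work.

improvement = 40          # ~40x efficiency gain quoted in the text
years = 2020 - 2013       # 7 years

doubling_period = years / math.log2(improvement)
print(f"Implied efficiency doubling period: {doubling_period:.2f} years")  # ~1.3 years
```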

Challenge-3: Sustainable Internet

Presently, most of the processing and storage on the Internet is performed in the cloud by massive data-centres that are located in remote areas, occupy the area of several football fields, contain tens of thousands of servers, consume as much electricity as a small city and rely on expensive cooling mechanisms. Such data-centres will not be sufficient in the IoT era. According to Cisco, by 2019 more than 24.3 exabytes of data will be generated every month, a volume expected to grow even further as the number of smart connected devices increases. For one, provisioning the bandwidth required to transfer these data from the devices to a centralized data-centre would be very costly. It is difficult to imagine how these data could be transferred over the existing public networks without substantial further investment to expand them, which might take years to complete, if it is technologically feasible at all.
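
For a rough sense of scale, the sketch below converts the 24.3 exabytes/month figure quoted above into an average sustained network throughput. The calculation assumes decimal units (1 EB = 10^18 bytes) and a 30-day month; it is only meant to illustrate why moving all such traffic to centralized data-centres would be costly.

```python
# Rough sense of scale for the Cisco figure quoted above: the average sustained
# throughput needed to move 24.3 exabytes per month.
# Assumes decimal units (1 EB = 10^18 bytes) and a 30-day month.

exabytes_per_month = 24.3
bytes_per_month = exabytes_per_month * 1e18
seconds_per_month = 30 * 24 * 3600

bits_per_second = bytes_per_month * 8 / seconds_per_month
print(f"Average sustained throughput: {bits_per_second / 1e12:.0f} Tbit/s")  # ~75 Tbit/s
```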

Response latency, therefore, becomes an issue, since the value of many IoT applications relies on an immediate response (for instance, smart traffic control and accident avoidance). In particular, it appears increasingly vital to deploy small-form-factor, energy-efficient data-centres near the sources of Big Data, to process and filter data locally rather than sending them to centralized cloud data-centres and flooding the public Internet communication infrastructure with data. Several industrial players (Cisco, IBM, Intel, etc.) have recognized the challenges of the IoT and have recently introduced the term Edge or Fog computing, which is being promoted as a new technology that complements cloud computing and brings cloud services closer to the rapidly growing number of user devices.

Challenge-4: Availability and Dependability

The sheer size of the Future Internet will result in numerous hardware resources experiencing a failure at any given time. To sustain trust and investment in the Internet economy, it is essential to guarantee the availability of an Internet service even when some of the hardware resources running it are down due to failures. Equally important, provisions need to be made so that errors do not compromise the integrity of a service (e.g. by producing wrong responses). Therefore, it is critical to design the next generation of servers and the accompanying system software of cloud data-centres to cope with potential hardware faults, and to do so in a minimally intrusive manner that does not compromise programmability or well-proven design flows and toolsets.
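
To illustrate why failures must be treated as the norm at this scale, the sketch below estimates how many servers in a large data-centre are unavailable at any instant and how unlikely it is that all of them are up simultaneously. The server count and per-server availability are assumed, illustrative values, not figures from the project.

```python
# Illustrative sketch: with tens of thousands of servers, some are always down.
# The server count and per-server availability below are assumptions chosen
# only to illustrate the point, not measurements from the UniServer project.

servers = 50_000                 # assumed data-centre size
availability = 0.999             # assumed per-server availability (99.9%)

expected_down = servers * (1 - availability)
prob_all_up = availability ** servers   # probability that not a single server is down

print(f"Expected servers down at any instant: {expected_down:.0f}")     # ~50
print(f"Probability that all servers are up:  {prob_all_up:.1e}")       # ~1.9e-22
```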

Challenge-5: Privacy and Security

Many future IoT services might not want to risk revealing highly confidential business or personal data to a third party, and thus might not be willing to send them to a cloud data-centre in an unknown location, preferring instead to analyse the data and store the outcomes of the analysis on a server within their own premises. The realization of such private clouds relies on the availability of small, easy-to-deploy servers with low running costs.
