Evolution of the Data Center

August 2, 2018 at 8:00 AM

 

Just as computers, phones and everything else in our world have advanced over the years, so have data centers. Data centers play a critical role in networking and have evolved to give businesses better access to their data, improved usability and easier management. Traditionally, data centers were only as good as the physical space they occupied; they were restricted by server space and by having enough room to store hardware. With today’s technological advancements, they are less space-centric and more focused on the cloud, speed and flexibility. Here, we’ll take a look at the evolution of the data center, from inception to the realm of future possibilities.

 

The Early Days

 

The earliest data centers were large rooms filled with computers that were difficult to operate and maintain. These primordial behemoths needed a special environment to keep the equipment running properly – equipment that was connected by a maze of cables and stacked on racks, cable trays and raised floors. Early data centers also used a large amount of power and generated a lot of heat, so they had to be cooled to keep from overheating. In 1964, the first supercomputer, the CDC 6600, an ancestor of today’s data centers, was built by Control Data Corporation. It was the fastest computer of its time, with a peak performance of 3 MFLOPS; it sold for $8 million and continued operating until 1977.

 

1970s

 

The 1970s brought the invention of the Intel 4004 processor, which allowed computers to become smaller. And in 1973, the first desktop computer, the Xerox Alto, was introduced. Although it was never sold commercially, it was the first step toward eliminating the mainframe. The first LAN came to life in 1977 in the form of ARCNET, which allowed computers to connect to one another over coax cables linked to a shared floppy data storage system.

 

1980s

 

The personal computer (PC) was born in 1981 with the introduction of the IBM model 5150. This new, smaller computer was a far cry from the expensive and expansive mainframes that were hard to cool. PCs allowed organizations to deploy desktop computers throughout their companies much more efficiently, leading to a boom in the microcomputer industry. Plus, in 1985, Ethernet LAN technology was standardized, largely taking the place of ARCNET.

  

1990s

  

The 1990s started with microcomputers working as servers and filling old mainframe storage rooms. These servers were accumulated by companies and managed on premises; they became known as data centers. The mid-90s saw the rise of the commercial Internet, and with it came demand for faster connections, an increased online presence and network connectivity as a business requirement. To meet these demands, new, larger-scale enterprise server rooms were built, housing data centers with hundreds or thousands of servers working around the clock. In the late 1990s, virtualization technology originally introduced in the 80s was revisited with a new purpose in the form of the virtual workstation, which was comparable to a virtual PC. Enterprise applications also became available online for the first time.

 

2000s 

 

By the early 2000s, PCs and data centers had grown exponentially. New technology was quickly emerging to allow data to be transmitted more easily and quickly. The first cloud-based services were launched by Amazon Web Services, which included storage, web services and computation. There was also a growing realization of the power required to run all of these data centers, so new innovations were introduced to help data centers become more energy efficient. In 2007, the modular data center was launched. One of the most popular was from Sun Microsystems, which packed 280 servers into a 20-foot shipping container that could be sent anywhere in the world. This offered a more cost-effective approach to corporate computing, but it also refocused the industry on virtualization and ways to consolidate servers.

 

2010s

 

By the 2010s, the Internet had become ingrained in every part of day-to-day life and business operations. Facebook had become a major player and began investing resources in finding ways to make data centers more cost- and energy-efficient across the industry. Plus, virtual data centers were common in almost three-quarters of organizations, and over one-third of businesses were using the Cloud. The focus then shifted to software-as-a-service (SaaS), with subscriptions and capacity-on-demand taking precedence over infrastructure, software and hardware. This model increased the need for bandwidth and drove the rise of huge companies providing access to cloud-based data centers, including Amazon and Google.

 

Today, the Cloud appears to be the path we are headed down, with new technology being introduced and the implementation of the IoT becoming more of a reality every day. We’ve definitely come a long way from the first gigantic mainframe data centers; one can only imagine what the next 60 years of innovation will bring.

 

What You Need to Know About WiMAX 802.16

July 26, 2018 at 8:00 AM

 

In the IEEE’s world of standards, 802.16 is dedicated to the global deployment of broadband metropolitan area networks. The technology behind this standard has been named WiMAX (Worldwide Interoperability for Microwave Access), and it is used for long-range wireless networking for both mobile and fixed connections. Though not as popular as Wi-Fi or LTE, WiMAX has much to offer.

 

When compared to similar technologies, WiMAX offers lower cost and increased flexibility. It is an OFDMA-based, all-IP, data-centric technology ideal for use in 4G mobile networks. WiMAX can be installed with shorter towers and less cabling, which supports city- or country-wide non-line-of-sight (NLoS) coverage. This cuts down installation time and saves on cost when compared to standard wired technology such as DSL. In addition to fixed connections, WiMAX service is offered through subscriptions for access via devices with the technology built in. Currently, WiMAX is found in many devices, such as phones, laptops, Wi-Fi devices and USB dongles.

 

WiMAX is capable of speeds up to 40 Mbps over a distance of several miles. WiMAX can also provide more than just internet access; it can deliver video and voice transmissions and telephone access. All of these capabilities, plus lower cost and faster installation times, make it an attractive option for areas where wired internet is too costly or unavailable. WiMAX can also be used in several other ways: as backhaul to transfer data through an internet network, as a replacement for satellite internet for fixed wireless broadband access and for mobile internet access comparable to LTE.
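
To put that 40 Mbps figure in perspective, here’s a quick back-of-the-envelope calculation of how long a large file transfer would take at full WiMAX speed. The file size is just an assumed example, and real-world throughput will be lower, for the reasons covered at the end of this post.

```python
# Rough WiMAX transfer-time estimate. The 700 MB file size is an
# arbitrary example; actual throughput drops with distance and load.

LINK_MBPS = 40        # headline WiMAX data rate
file_mb = 700         # assumed example file size in megabytes

seconds = file_mb * 8 / LINK_MBPS   # megabytes -> megabits, then divide by rate
print(f"{file_mb} MB at {LINK_MBPS} Mbps: ~{seconds:.0f} s ({seconds / 60:.1f} min)")
```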

 

After many revisions, WiMAX has now evolved into its most current version, WiMAX Advanced, which is backwards-compatible with previous versions (WiMAX Release 1.0 and 2.0). WiMAX Advanced offers all of the same capabilities while providing 100 Mbps mobile speeds and 1 Gbps fixed-station speeds. Plus, WiMAX Advanced supports additional devices and broadband wireless access technologies, including MIMO, beamforming and operation within multi-radio access networks. WiMAX is managed by the WiMAX Forum, a non-profit group that certifies and endorses wireless products compatible with the 802.16 standard, including WiMAX Advanced, AeroMACS and WiGRID.

 

Of course, there are drawbacks to WiMAX: speeds drop as users move further from the transmission source, and when multiple users are connected at the same time, performance can suffer. WiMAX might never be as popular as Wi-Fi, but there are plenty of benefits that make it a good option to consider.

 

GigE Vision – A Clear Standard

July 19, 2018 at 8:00 AM

 

As big data has gotten bigger and bigger, so have vision applications. GigE Vision is a global interface standard designed to support the transmission of high-speed video and related control data over Ethernet networks, including GigE, 10 GigE and 802.11 wireless.

 

This standard was developed around the Gigabit Ethernet communication protocol and provides fast image transfer using readily available Ethernet cables over extended distances. GigE Vision is capable of fast, high-bandwidth transfers of large images in real time at 125 MB/s over cable runs of up to 100 meters. With the use of standard Cat5e and Cat6 cables and connectors, GigE Vision is cost effective and highly scalable, and it allows for simple integration into existing Ethernet infrastructures.
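
To see what that 125 MB/s ceiling means in practice, the short sketch below estimates the maximum frame rate for an uncompressed image stream. The resolution and bit depth are illustrative assumptions, not figures from the standard, and real links lose some capacity to protocol overhead.

```python
# Back-of-the-envelope GigE Vision frame-rate estimate.
# Resolution and bit depth below are assumed examples.

LINK_BYTES_PER_SEC = 125_000_000                 # 1 Gbps Ethernet ~ 125 MB/s raw

width, height, bytes_per_pixel = 1920, 1080, 1   # hypothetical 8-bit mono sensor
frame_bytes = width * height * bytes_per_pixel

max_fps = LINK_BYTES_PER_SEC / frame_bytes
print(f"Frame size: {frame_bytes / 1e6:.2f} MB")
print(f"Theoretical ceiling: ~{max_fps:.0f} frames/s before protocol overhead")
```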

 

Managed by the AIA, a trade association for the machine vision industry, the GigE Vision standard was introduced in 2006 and has since been adopted globally, with most major industrial video hardware and software vendors having developed GigE Vision-compliant products. By following the same standard, products from different vendors are interoperable. This means frame grabbers, embedded hardware interfaces, cameras, video servers, video receivers, control applications and management entities can all work together seamlessly over a common Ethernet platform.

 

Much like USB3 Vision, GigE Vision relies on GenICam, a generic programming interface for different types of cameras, to access and control features in compliant cameras and other imaging devices. The simplicity of installation and high-performance specs of GigE Vision make it ideal for industrial applications. The standard is also used in telecom, military, data communications and machine vision applications.
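
The key idea behind GenICam is that every compliant device describes its settings as named feature nodes, which software reads and writes generically instead of through vendor-specific driver calls. The sketch below mimics that node-map pattern; the class and feature names are illustrative stand-ins, not an actual GenICam SDK.

```python
# Minimal mock of GenICam-style feature access. All class and feature
# names here are hypothetical illustrations of the node-map concept.

class FeatureNode:
    """One named camera setting, e.g. exposure time or gain."""
    def __init__(self, name, value, unit=""):
        self.name, self.value, self.unit = name, value, unit

class NodeMap:
    """Lookup table of feature nodes, like one built from a device's XML."""
    def __init__(self, nodes):
        self._nodes = {n.name: n for n in nodes}

    def get_node(self, name):
        return self._nodes[name]

# A pretend camera exposing a few common features by name.
camera = NodeMap([
    FeatureNode("ExposureTime", 10000.0, "us"),
    FeatureNode("Gain", 1.0, "dB"),
    FeatureNode("PixelFormat", "Mono8"),
])

# Generic software adjusts features by name, never by vendor-specific call.
camera.get_node("ExposureTime").value = 5000.0
node = camera.get_node("ExposureTime")
print(node.name, node.value, node.unit)
```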

 

GigE Vision is currently at version 2.0, which adds non-streaming device control, faster streaming over 10 Gigabit Ethernet and link aggregation. Version 2.0 is ideal for multi-camera systems: it introduced the Precision Time Protocol (PTP), which enables cameras to be activated at the same time, and Trigger-over-Ethernet, which removes the need for an I/O cable. It also allows multi-camera systems to be precisely synchronized, supports the transmission of compressed images and enhances support for multi-tap sensors. With all of its capabilities and benefits, GigE Vision has proven to be a boon in the world of vision applications.

 

Short Range Communications: A to Z

July 12, 2018 at 8:00 AM

 

These days, there is more wireless technology in use than ever before. From phones to toys to industrial automation, wireless devices are being used in all sectors, and for good reason. Wireless technology is portable, easy to install and flexible, and it eliminates the cost of expensive wiring. With the boom of wireless devices, there has also been a surge of wireless protocols and standards to support all of that technology. These include several short-range wireless communication technologies that transmit over shorter distances than long-range technologies but still pack a punch, which makes them great for certain applications. Here, we’ll take a look at the long list of short-range communication standards and technologies to see how they stack up.

 

ANT+

 

ANT and ANT+ are sensor network technologies used for collecting and transferring sensor data, maintained by the ANT+ Alliance Special Interest Group. The protocol is a type of personal-area network (PAN) that features remarkably low power consumption and long battery life. It divides the 2.4 GHz band into 1 MHz channels and accommodates multiple sensors. ANT+ is primarily used for short-range, low-data-rate sensor applications such as sports monitors, wearables, wellness products, home health monitoring and vehicle tire pressure sensing, as well as in household items that can be controlled remotely, such as TVs, lights and appliances.

 


Bluetooth

 

This popular technology is managed by the Bluetooth Special Interest Group (SIG) and is covered by the IEEE 802.15.1 standard. Originally created as an alternative to cabled RS-232, Bluetooth is now used to send data across PANs between fixed and mobile devices. This plug-and-play technology utilizes the 2.4-2.485 GHz band and has a standard range of 10 meters, but it can extend to 100 meters at maximum power with a clear path. Bluetooth Low Energy has a simpler design and is a direct competitor of ANT+, focusing on health and medical applications.

 

 

EnOcean

 

This system is self-powered and able to transmit data wirelessly by combining ultra-low power consumption with energy-harvesting technology. Instead of a power supply, EnOcean’s wireless sensor technology collects energy from its surroundings. Energy from the environment, such as light, pressure, kinetic motion and temperature differences, is harvested and used to transmit a signal up to 30 meters indoors using a very small amount of energy. In the US, EnOcean runs on the 315 MHz and 902 MHz bands; in Europe, it uses the 868 MHz band; and in Japan, it operates on the 315 MHz and 928 MHz bands.

 

 

FirstNet

 

The FirstNet organization is an independent government authority dedicated to providing specialized communication services for first responders. The FirstNet network is the first high-speed, nationwide, wireless broadband network dedicated to public safety. With this network, all emergency workers are able to use one interoperable LTE network devoted solely to keeping them connected. FirstNet uses the 700 MHz spectrum available nationwide and aims to solve interoperability challenges and ensure uninterrupted communication to enhance the safety of communities and first responders.

 

NFC


Near-Field Communication (NFC) is an ultra-short-range technology created for contactless communication between devices. It is often used for secure payments, fast passes and similar applications. Operating at 13.56 MHz in the ISM band, NFC has a maximum range of around 20 cm, which provides a more secure connection that is usually encrypted. Many smartphones already include an NFC tag.

 

 

RFID


Radio-frequency identification (RFID) uses small, flat, inexpensive tags that can be attached to almost anything and used for identification, location, tracking and inventory management. When a reader unit is nearby, it transmits a high-power RF signal to the tags and reads the data stored in their memory. Low-frequency RFID uses the 125-134 kHz band, high-frequency RFID uses the 13.56 MHz ISM band and ultra-high-frequency RFID uses the 860-960 MHz band. With multiple ISO/IEC standards available for RFID, this technology has replaced bar codes in some industries.

 

 

ZigBee


ZigBee is the standard of the ZigBee Alliance. The path of a message in this network zig-zags like a bee, hence the name. It is a software protocol that uses the 802.15.4 transceiver as a base and is meant to be cheaper and simpler than other wireless networks such as Wi-Fi or Bluetooth. ZigBee can build large mesh networks for sensor monitoring, handling up to 65,000 nodes, and it also supports multiple radio network topologies such as point-to-point and point-to-multipoint. It has a data rate of 250 kbps and can transfer wireless data over distances of up to 100 m. ZigBee can be used for a range of applications including remote patient monitoring, wireless lighting and electrical meters, traffic management systems, consumer TV and factory automation, to name a few.
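
To give a feel for what 250 kbps means for typical ZigBee traffic, the sketch below estimates the raw airtime of a single small sensor frame. The payload size is an assumed example, and the estimate ignores mesh routing, MAC overhead and retries.

```python
# Rough ZigBee airtime estimate at the 2.4 GHz rate of 250 kbps.
# The 24-byte frame size is an assumption; overhead and retries are ignored.

DATA_RATE_BPS = 250_000    # ZigBee data rate in bits per second
frame_bytes = 24           # hypothetical small sensor frame

airtime_ms = frame_bytes * 8 / DATA_RATE_BPS * 1000
print(f"{frame_bytes}-byte frame: ~{airtime_ms:.2f} ms on air")
```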

 

 

What short-range communication lacks in distance, it more than makes up for in versatility and capability, and as we’ve seen, there are plenty of options available to support all of your short-range application requirements.

 

Case Study: Sony Biotechnology

June 28, 2018 at 8:00 AM

 

Sony Biotechnology is an award-winning, state-of-the-art medical manufacturing company that has delivered innovative, high-quality product solutions to the global market for 23 years. Its main focus is the flow cytometry market, designing and building equipment that sorts human and animal cells.

 

One of Sony Biotechnology’s new products needed a high-quality, shielded Ethernet cable that could meet Sony’s design requirements. The cable needed stranded, color-coded conductors; a tinned-copper braided shield that could be soldered to; and a low-smoke zero-halogen (LSZH) jacket to meet environmental and safety standards. This design-specific cable also had to meet project deadlines and cost constraints.

 

Sony purchased cable from another manufacturer but ran into several issues that made the cable unusable. For example, during testing, the cable would not hold the solder needed to attach the grounding wire. Plus, the individual conductors were not color coded, which made termination very challenging and time consuming. After asking for the supplier’s cable specs, it was also discovered that the cable’s braided shield had been made with aluminum instead of tinned copper.

 

Since this manufacturer’s cable failed Sony’s testing and requirements, Sony consulted with L-com’s product management team to find a solution. Our product team was able to provide an off-the-shelf solution that met all of Sony’s needs: the TRD855DSZ-7 Ethernet cable. This double-shielded, 26 AWG cable with an LSZH jacket not only met all of Sony’s requirements but was available immediately and met Sony’s price target.

 

 


 
