802.3bv - The Power of Plastic Optical Fiber

August 16, 2018 at 8:00 AM

 

In the realm of IEEE standards, 802.3 is bringing a lot to the table for today's newest innovations. This standard includes several amendments that support groundbreaking technology, including 802.3at and 802.3bt, which support Power over Ethernet (PoE), and 802.3bz, which delivers 2.5 and 5 Gbps speeds over copper. Now we can add 802.3bv to that list. 802.3bv was developed to support Gigabit Ethernet over Plastic Optical Fiber (POF), and it's slated to deliver impressive speed and performance.

 

First, let's take a look at plastic optical fiber and its capabilities. POF is a large-core, step-index optical fiber capable of speeds of up to 1 Gbps. It is easy to install, cost effective and durable, making it an ideal choice for networks reaching 80 meters with infrastructure that connects to switches and/or wall plates. POF can meet the higher bandwidth demands of developing technology and can be used in new applications for home, industrial and automotive networks. Thus, there has been a push for the development of 802.3bv to support all of the possible POF applications.

 

The IEEE 802.3bv standard is an amendment to 802.3 that defines 1000 Mb/s operation over POF, allowing POF to meet the increased bandwidth needs of automotive, industrial and home network connectivity applications. 802.3bv delivers Gigabit Ethernet operation over POF and defines physical layer specifications for the home, industrial and automotive industries. With 802.3bv, POF Ethernet networks have the support of a robust and reliable media option. Automotive applications are specified for operation over at least 15 meters with four inline POF connections, or at least 40 meters with no inline connections. Home and industrial applications can reach at least 50 meters with one POF connection.

 

There are three physical layer specifications in this amendment, each designed for one of the targeted industries. All use 1000BASE-H encoding over duplex POF cable and red-light wavelength transmission.

 

• 1000BASE-RHA – 1000 Mb/s speeds for home network and consumer applications

• 1000BASE-RHB – 1000 Mb/s speeds for industrial applications

• 1000BASE-RHC – 1000 Mb/s speeds for automotive applications
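
To keep these variants and their link budgets straight, here's a minimal Python sketch that organizes the figures quoted above into a lookup table. The dictionary layout and the supports() helper are our own illustration for this post, not anything defined by the standard itself.

    # The three 1000BASE-RH PHY variants and the minimum reach /
    # inline-connection figures quoted above (802.3bv states these as
    # minimums; real installations should consult the standard).
    POF_PHYS = {
        "1000BASE-RHA": {"application": "home/consumer",
                         "min_reach_m": 50, "inline_connections": 1},
        "1000BASE-RHB": {"application": "industrial",
                         "min_reach_m": 50, "inline_connections": 1},
        "1000BASE-RHC": {"application": "automotive",
                         # 15 m with four connections; 40 m with none
                         "min_reach_m": 15, "inline_connections": 4},
    }

    def supports(phy: str, length_m: float, connections: int) -> bool:
        """Rough feasibility check of a planned POF run against the
        minimum figures quoted in this post."""
        spec = POF_PHYS[phy]
        return (length_m <= spec["min_reach_m"]
                and connections <= spec["inline_connections"])

    print(supports("1000BASE-RHB", 45, 1))  # True: within 50 m and one connection
    print(supports("1000BASE-RHC", 30, 4))  # False: four connections limit reach to 15 m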

 

With the development of 802.3bv, yet another layer of power and possibility has been added to the realm of IEEE standards, proving that the world of technology has no intention of slowing down.

 

USB 3.1 Gen 1 vs. Gen 2

August 9, 2018 at 8:00 AM

 

Not sure of the difference between USB 3.1 Gen 1 (aka SuperSpeed USB) and USB 3.1 Gen 2 (aka SuperSpeed USB 10 Gbps or SuperSpeed+)? Don't worry, you're not alone. The USB-IF has rebranded and restructured how it differentiates between the two, leaving many scratching their heads as to which is which. Have no fear though, we've got it all figured out and are here to clear it up for you.

 

If you were impressed by the super speeds brought to you courtesy of USB 3.1 Gen 1, then you're going to be over the moon for USB 3.1 Gen 2. This iteration of USB technology bolsters speeds and delivers additional benefits sure to please all users. As its name suggests, SuperSpeed+ USB increases the data transfer rate from 5 Gbps to 10 Gbps, making it twice as fast as USB 3.1 Gen 1 and on par with first-generation Thunderbolt technology. USB 3.1 Gen 2 also uses more efficient data encoding (128b/132b instead of Gen 1's 8b/10b), which not only increases throughput but also improves I/O power efficiency.
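
To see what that encoding change buys you, here's a quick back-of-the-envelope calculation in Python. Gen 1's 8b/10b line coding carries 8 payload bits per 10 bits on the wire, while Gen 2's 128b/132b carries 128 per 132; the results are theoretical line-rate ceilings, not real-world throughput.

    # Usable bandwidth after line-coding overhead.
    def effective_gbps(raw_gbps: float, payload_bits: int, coded_bits: int) -> float:
        return raw_gbps * payload_bits / coded_bits

    gen1 = effective_gbps(5.0, 8, 10)      # 8b/10b: 20% overhead
    gen2 = effective_gbps(10.0, 128, 132)  # 128b/132b: ~3% overhead

    print(f"Gen 1: {gen1:.2f} Gbps usable of 5 Gbps")   # 4.00 Gbps
    print(f"Gen 2: {gen2:.2f} Gbps usable of 10 Gbps")  # 9.70 Gbps

So Gen 2 isn't just twice the raw signaling rate; it also wastes far less of that rate on coding overhead.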

 

Though the maximum cable length is shortened from 5 meters to 1 meter, USB 3.1 Gen 2 retains the ability to carry data plus power over one cable, and it can support multiple cameras and the USB3 Vision standard. Plus, USB 3.1 Gen 2 increases the power delivery level from 4.5 Watts to an astounding 100 Watts. This standard also supports USB Type-C, DisplayPort over Type-C and USB Power Delivery.

 

USB 3.1 Gen 2 is fully backward compatible with existing USB 3.0 software and devices, 5 Gbps hubs and devices as well as USB 2.0 products. In case there’s still some lingering confusion, here’s a handy chart to help compare these two side-by-side.

  

 

                      USB 3.1 Gen 1    USB 3.1 Gen 2
Data Rate             5 Gbps           10 Gbps
Power Delivery        4.5 W            100 W
Max Cable Length      5 m              1 m
Multiple Cameras      ✓                ✓
USB3 Vision           ✓                ✓
Data + Power          ✓                ✓

 

 

Evolution of the Data Center

August 2, 2018 at 8:00 AM

 

Just as computers, phones and everything else in our world have advanced over the years, so have data centers. Data centers play a critical role in networking and have evolved to give businesses better access to their data, with improved usability and easier management. Traditionally, data centers were only as good as the physical space they occupied; they were restricted by server space and by having enough room to store hardware. With today's technological advancements, they are less space-centric and more focused on the cloud, speed and flexibility. Here, we'll take a look at the evolution of the data center, from inception to the realm of future possibilities.

 

The Early Days

 

The earliest data centers were large rooms filled with computers that were difficult to operate and maintain. These primordial behemoths needed a special environment to keep the equipment running properly – equipment that was connected by a maze of cables, stacked on racks and supported by cable trays and raised floors. Early data centers also used a large amount of power and generated a lot of heat, so they had to be cooled to keep from overheating. In 1964, Control Data Corporation built the CDC 6600, widely considered the first supercomputer and an ancestor of today's data centers. It was the fastest computer of its time, with peak performance of 3 MFLOPS; it sold for $8 million and continued operating until 1977.

 

1970s

 

The 1970s brought the invention of the Intel 4004 processor, which allowed computers to be smaller. And in 1973, the first desktop computer, the Xerox Alto, was introduced. Although it was never sold commercially, it was the first step toward eliminating the mainframe. The first LAN came to life in 1977 in the form of ARCNET, which allowed computers to connect to one another with coax cables linked to a shared floppy-based data storage system.

 

1980s

 

The personal computer (PC) was born in 1981 with the introduction of the IBM model 5150. This new, smaller computer was a far cry from the expensive and expansive mainframes that were hard to cool. PCs allowed organizations to deploy desktop computers throughout their companies far more efficiently, leading to a boom in the microcomputer industry. Plus, in 1985, Ethernet LAN technology was standardized, largely taking the place of ARCNET.

  

1990s

  

The 1990s started with microcomputers working as servers and filling old mainframe storage rooms. These accumulations of servers, managed by companies on premises, became known as data centers. The mid-90s saw the rise of the commercial Internet, and with it came demand for faster connections, an increased online presence and network connectivity as a business requirement. To meet these demands, new, larger-scale enterprise server rooms were built, housing data centers that contained hundreds or thousands of servers working around the clock. In the late 1990s, virtualization technology originally introduced in the 80s was revisited with a new purpose in the form of the virtual workstation, comparable to a virtual PC. Enterprise applications also became available for the first time through an online website.

 

2000s 

 

By the early 2000s, PCs and data centers had grown exponentially. New technology was quickly emerging to allow data to be transmitted more easily and quickly. The first cloud-based services were launched by Amazon Web Services, including storage, web services and computation. There was also a growing realization of the power required to run all of these data centers, so new innovations were introduced to help data centers become more energy efficient. In 2007, the modular data center was launched. One of the most popular was from Sun Microsystems: 280 servers in a 20-foot shipping container that could be sent anywhere in the world. This offered a more cost-effective approach to corporate computing, but also refocused the industry on virtualization and ways to consolidate servers.

 

2010s

 

By the 2010s, the Internet had become ingrained in every part of day-to-day life and business operations. Facebook had become a major player and began investing resources in finding ways to make data centers more cost and energy efficient across the industry. Virtual data centers were common in almost three-quarters of organizations, and over a third of businesses were using the Cloud. The focus then shifted to software-as-a-service (SaaS), with subscription and capacity-on-demand models taking the place of infrastructure, software and hardware as the priority. This model increased the need for bandwidth and drove the creation of huge companies providing access to cloud-based data centers, including Amazon and Google.

 

Today, the Cloud appears to be the path we're headed down, with new technology being introduced and the implementation of the IoT becoming more of a reality every day. We've definitely come a long way from the first gigantic mainframe data centers; one can only imagine what the next 60 years of innovation will bring.

 

GigE Vision – A Clear Standard

July 19, 2018 at 8:00 AM

 

As big data has gotten bigger and bigger, so have vision applications. GigE Vision is a global interface standard designed to support the transmission of high-speed video and related control data over GigE, 10 GigE and 802.11 wireless networks.

 

This standard was developed using the Gigabit Ethernet communication protocol and provides fast image transfer over readily available Ethernet cables at extended distances. GigE Vision is capable of fast, high-bandwidth transfers of large images in real time at 125 MB/s over runs of up to 100 meters. With the use of standard Cat5e and Cat6 cables and connectors, GigE Vision is cost effective and highly scalable, and it allows for simple integration with existing Ethernet infrastructure.
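
As a rough sizing exercise, that 125 MB/s ceiling translates directly into a frame-rate budget. The Python sketch below uses hypothetical camera resolutions for illustration, not figures from the GigE Vision standard, and ignores protocol overhead.

    # Frame-rate ceiling for an uncompressed video stream on one GigE link.
    LINK_MB_S = 125.0  # 1 Gbps / 8 bits per byte

    def max_fps(width: int, height: int, bytes_per_pixel: float,
                link_mb_s: float = LINK_MB_S) -> float:
        frame_mb = width * height * bytes_per_pixel / 1e6
        return link_mb_s / frame_mb

    print(f"{max_fps(1920, 1080, 1):.0f} fps")  # 8-bit 1080p mono: ~60 fps
    print(f"{max_fps(2448, 2048, 1):.0f} fps")  # 8-bit 5 MP mono: ~25 fps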

 

Managed by the AIA, a trade association for the machine vision industry, the GigE Vision standard was introduced in 2006 and has since been adopted globally, with most major industrial video hardware and software vendors having developed products that are GigE Vision-compliant. By following the same standard, products from different vendors are all interoperable. This means frame grabbers, embedded hardware interfaces, cameras, video servers, video receivers, control applications and management entities can all work together seamlessly using a common Ethernet platform.

 

Much like USB3 Vision, GigE Vision relies on GenICam, a generic programming interface for different types of cameras, to access and control features in compliant cameras and other imaging devices. The simplicity of installation and high-performance specs of GigE Vision make it ideal for industrial applications. The standard is also used in telecom, military, data communications and machine vision applications.

 

GigE Vision is currently at version 2.0, which adds non-streaming device control, faster streaming over 10 Gigabit Ethernet and link aggregation. Version 2.0 is ideal for multi-camera systems: it introduced the Precision Time Protocol (PTP), which enables cameras to be activated at the same time, and Trigger-over-Ethernet, which eliminates the need for an I/O cable. It also allows multi-camera systems to be precisely synchronized, permits compressed images to be transmitted, and adds enhanced support for multi-tap sensors. With all of its capabilities and benefits, GigE Vision has proven to be a boon in the world of vision applications.
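
For multi-camera systems, the same arithmetic shows why 10 GigE and link aggregation matter. This sketch, again with hypothetical camera numbers and no allowance for protocol overhead, estimates how many synchronized streams fit on a shared link.

    # Optimistic ceiling on the number of cameras one link can carry.
    def cameras_supported(frame_mb: float, fps: float, link_mb_s: float) -> int:
        per_camera_mb_s = frame_mb * fps
        return int(link_mb_s // per_camera_mb_s)

    frame_mb = 1920 * 1080 * 1 / 1e6  # ~2.07 MB per 8-bit 1080p frame

    print(cameras_supported(frame_mb, 30.0, 125.0))   # GigE: 2 cameras at 30 fps
    print(cameras_supported(frame_mb, 30.0, 1250.0))  # 10 GigE: 20 cameras at 30 fps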

 

Case Study: Sony Biotechnology

June 28, 2018 at 8:00 AM

 

Sony Biotechnology is an award-winning, state-of-the-art medical manufacturing company that has delivered innovative, high-quality product solutions to the global market for 23 years. Their main focus is the flow cytometry market, designing and building equipment that sorts human and animal cells.

 

One of Sony Biotechnology's new products required a high-quality, shielded Ethernet cable that could meet Sony's design requirements. The cable needed stranded, color-coded conductors, a tinned-copper braided shield that could be soldered to, and a low-smoke zero-halogen (LSZH) jacket to meet environmental and safety standards. This design-specific cable also had to meet the project's deadline and cost constraints.

 

Sony initially purchased cable from another manufacturer but ran into several issues that made the cable unusable. For example, during testing, the cable would not hold solder for attaching the grounding wire. Plus, the individual conductors were not color coded, which made termination very challenging and time consuming. After asking for the supplier's cable specs, Sony also discovered that the cable's braided shield had been made with aluminum instead of tinned copper.

 

Since this manufacturer's cable failed Sony's testing and requirements, Sony consulted with L-com's product management team to find a solution. Our product team was able to provide an off-the-shelf solution that met all of Sony's needs: the TRD855DSZ-7 Ethernet cable. This double-shielded, 26 AWG cable with an LSZH jacket not only met all of Sony's requirements, but was available immediately and met Sony's price target.

 

 

To read the entire case study, click here.

 
