Readers’ Choice - Top Blog Posts of 2018

December 20, 2018 at 8:00 AM

 

Our goal for this blog is to provide interesting and informative content for our readers. So we always enjoy taking a look back at the end of the year to see what the most popular posts were. To make sure you didn’t miss anything, here’s a list of the most read posts of 2018. We hope to see you back in 2019!

 

 

1. Cat6 Cable: Shielded vs. Unshielded


Category 6 Ethernet cable is designed to provide high-speed data rates, but how do you decide between shielded and unshielded? Here, we compare them side by side so you can choose which will work best for your application. Read more.

 

 

2. 10 of the Worst Cabling Nightmares

 

We pride ourselves on our commitment to provide the best connectivity solutions for our customers, helping them to manage their data centers. So it always comes as a shock when we see cabling infrastructure that is a complete nightmare. This post has some of the worst offenders we’ve seen on the web. Read more.

 

 

3. The Advantages and Disadvantages of Shielded Ethernet Cable

 

When it comes to shielded Ethernet cable, there are pros and cons. This post takes a look at both the good and the bad to help you weigh your options. For example, shielding can offer protection from EMI/RFI, but its weight and limited flexibility mean it’s not ideal for every application. To help decide if shielded Ethernet cable is right for your installation, read the post.

 

 

4. 75 Ohm vs. 50 Ohm – Coaxial Comparison

 

Ohm may sound like something you’d say while meditating, but when it comes to coaxial cables, it is actually a unit of impedance. Impedance is the total opposition a cable presents to the flow of alternating current, combining resistance with reactance. To see how 75 Ohm and 50 Ohm cables compare, read our post.
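As an aside, a coax cable’s characteristic impedance comes from its geometry and dielectric, not its length. The short sketch below uses the standard formula Z0 = (60/√εr)·ln(D/d); the diameter ratios are hypothetical examples for illustration, not the specs of any particular cable.

```python
import math

def coax_impedance(D, d, er):
    """Characteristic impedance (ohms) of a coaxial cable:
    Z0 = (60 / sqrt(er)) * ln(D / d), where D is the inner
    diameter of the shield, d is the diameter of the center
    conductor, and er is the dielectric's relative permittivity."""
    return 60.0 / math.sqrt(er) * math.log(D / d)

# Hypothetical air-dielectric (er = 1) geometries:
print(round(coax_impedance(3.6, 1.0, 1.0), 1))  # a ratio near 3.6 lands near 77 ohms
print(round(coax_impedance(2.3, 1.0, 1.0), 1))  # a ratio near 2.3 lands near 50 ohms
```

This is why 75 Ohm and 50 Ohm cables of the same family look different in cross-section: the ratio of shield to center-conductor diameter sets the impedance.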

 

 

5. Good Vibrations: Vibration-Proof USB Connectors


Universal Serial Bus (USB) is one of the most widely used technologies for connecting and powering devices. One fundamental flaw of USB is its sensitivity to vibration, which can cause the connector to dislodge. In this post we show you some solutions for keeping your USB devices connected. Read more.

 

Evolution of the Data Center

August 2, 2018 at 8:00 AM

 

Just as computers, phones and everything else in our world have advanced over the years, so have data centers. Data centers play a critical role in networking and have evolved to give businesses better access to their data with improved usability and easier management. Traditionally, data centers were only as good as the physical space they took up: they were restricted by server space and by having enough room to store hardware. With today’s technological advancements, they are less space-centric and more focused on the cloud, speed and flexibility. Here, we’ll take a look at the evolution of the data center, from inception to the realm of future possibilities.

 

The Early Days

 

The earliest data centers were large rooms filled with computers that were difficult to operate and maintain. These primordial behemoths needed a special environment to keep the equipment running properly – equipment that was connected by a maze of cables and stacked on racks, cable trays and raised floors. Early data centers also used a large amount of power and generated a lot of heat, so they had to be cooled to keep from overheating. In 1964, Control Data Corporation built the first supercomputer, the CDC 6600, an ancestor of today’s data centers. It was the fastest computer of its time with a peak performance of 3 MFLOPS; it sold for $8 million and remained in operation until 1977.

 

1970s

 

The 1970s brought the invention of the Intel 4004 processor, which allowed computers to be smaller. And in 1973, the first desktop was introduced, the Xerox Alto. Although it was never sold commercially, it was the first step toward eliminating the mainframe. The first LAN was brought to life in 1977 in the form of ARCNET, which allowed computers to connect to one another with coax cables linked to a shared floppy data storage system.

 

1980s

 

The personal computer (PC) was born in 1981 with the introduction of the IBM model 5150. This new, smaller computer was a far cry from the expensive and expansive mainframes that were hard to cool. PCs allowed organizations to deploy desktop computers throughout their companies much more efficiently, leading to a boom in the microcomputer industry. Plus, in 1985, Ethernet LAN technology was standardized, largely taking the place of ARCNET.

  

1990s

  

The 1990s started with microcomputers working as servers and filling old mainframe storage rooms. Companies accumulated these servers and managed them on premises; these facilities became known as data centers. The mid-90s saw the rise of the commercial Internet, and with it came demand for faster connections, an increased online presence and network connectivity as a business requirement. To meet these demands, new, larger-scale enterprise server rooms were built, with data centers containing hundreds or thousands of servers working around the clock. In the late 1990s, virtualization technology originally introduced in the 80s was revisited with a new purpose in the form of the virtual workstation, which was comparable to a virtual PC. Enterprise applications also became available for the first time through an online website.

 

2000s 

 

By the early 2000s, PCs and data centers had grown exponentially. New technology was quickly emerging to allow data to be transmitted faster and more easily. The first cloud-based services were launched by Amazon Web Services, which included storage, web services and computation. There was also a growing realization of the power required to run all of these data centers, so new innovations were introduced to help data centers become more energy efficient. In 2007, the modular data center was launched. One of the most popular was from Sun Microsystems, which packed 280 servers into a 20-foot shipping container that could be sent anywhere in the world. This offered a more cost-effective approach to corporate computing, but also refocused the industry on virtualization and ways to consolidate servers.

 

2010s

 

By the 2010s, the Internet had become ingrained in every part of day-to-day life and business operations. Facebook had become a main player and began investing resources in finding ways to make data centers more cost and energy efficient across the industry. Plus, virtual data centers were common in almost three-quarters of organizations, and over a third of businesses were using the cloud. The focus then shifted to software-as-a-service (SaaS), with subscriptions and capacity-on-demand taking center stage instead of infrastructure, software and hardware. This model increased the need for bandwidth and spurred the creation of huge companies providing access to cloud-based data centers, including Amazon and Google.

 

Today, the Cloud appears to be the path we are headed down, with new technology being introduced and the implementation of the IoT becoming more of a reality every day. We’ve definitely come a long way from the first gigantic mainframe data centers; one can only imagine what the next 60 years of innovation will bring.

 

How Data Centers Can Prepare for a Natural Disaster

May 31, 2018 at 8:00 AM

 

We’ve learned that the Cloud isn’t floating in the sky; it’s actually thousands of data centers full of servers that store and transmit all of the data we use every day. But what happens when a data center is affected by a natural disaster? In this post, we’ll take a look at the defensive strategies used to keep our data safe and our cloud aloft, even in the worst circumstances.

 

From blizzards and hurricanes to floods and fires, we seem to have seen a large number of natural disasters in recent history. Fortunately, data centers are prepared with plans to maintain Internet connections and keep your data accessible even in the worst conditions. By having preparedness plans in place, staff willing to stay at their posts and generators to provide power, key data centers can withstand record-breaking hurricanes and even serve as evacuation shelters for citizens and headquarters for law enforcement.

 

Here are some ways data centers can prepare for natural disasters:

 

Make a Plan

The best defense is a good offense: have a plan in place, test that plan, have a plan B for when that plan fails and be ready to improvise. When it comes to Mother Nature, even the most prepared have to roll with the punches as things change.

 

Build a Fortress

The ideal structure to house your data center would be impenetrable. That might be too much to ask, but newly constructed buildings can be made to withstand earthquakes, floods, fires or explosions. Shatterproof, explosion-resistant glass, reinforced concrete walls and a strategic location outside flood zones can also provide an extra layer of protection.

 

These additional precautions might not be possible in older buildings, but there are still steps you can take to help protect your data center:

 

·       Move hardware to a safer location if possible:

    - Ideally, a data center should be away from windows, in the center of a building and above ground level

    - Higher floors are better, except in an earthquake zone, where lower floors are safer

·       Install pumps to remove water and generators to keep the pumps running

·       If there are windows, remove objects that could become airborne

·       Check fire extinguishing systems regularly

 

Redundancy is Key

Hosting all data in one place is opening the door to disaster. A safer option is to host it in multiple locations at redundant centers that can back each other up if disaster strikes one or more facilities. These centers don’t have to be on opposite ends of the world, but putting them in different geographic regions is probably the safest bet. They should be far enough apart that one disaster won’t take them all out.

 

Back That Data Up

If there’s no time to back up data to the Cloud, making a physical backup of the data and sending it with someone who’s evacuating is a good second option.

 

Natural disasters are unavoidable, and the most important asset to keep safe is always the people working inside the data center, but with a plan in place to keep Mother Nature at bay, you might be able to salvage the data center too.

 

Why DC is Making a Comeback in Data Centers

April 12, 2018 at 8:00 AM

 

It’s safe to say that the environment has become a hot-button issue lately. We are having more conversations about how people affect the environment and what we can do to lessen those effects. That said, did you know that data centers create an ecological footprint as big as the airline industry’s? In fact, in recent years, data centers have consumed more power than the entire UK. Needless to say, data centers are energy-consuming monsters. But what can be done to curb all of that energy usage? There’s no way we’re going to disassemble all of our data centers. That’s where DC power comes in.

 

At the end of the 19th century came the first ever battle for technology standard supremacy: alternating current (AC) vs. direct current (DC). In the end, AC came out on top. Though many DC devices are still used today, AC has long reigned supreme as the primary standard for power. But now that we are rethinking energy usage, use of DC is again on the rise, especially high-voltage direct current (HVDC), which allows for low-loss bulk transmission of electrical power over long distances.

 

Massive amounts of electricity are wasted throughout data center systems. Energy is lost in cooling, air conditioning, processors and the distribution of power. Traditionally, data centers transform incoming AC voltage into DC power and then convert it back into AC. The problem is that during each conversion from AC to DC and back to AC, energy disappears in the form of heat. This loss compounds as the waste heat must be cooled and discharged, which requires more components that create more heat.

 

 

Eliminating the conversion from DC back to AC and distributing DC voltage in data centers might be a perfect solution. If a server already runs on DC power, DC can be used throughout the chain, and any incoming AC voltage can be converted to DC once for distribution. Some studies have shown that avoiding multiple transformations and conversions can make the power supply to the server about 10% more efficient. Plus, the architecture of a DC power chain comprises considerably fewer components than AC, which means less space is needed for electrical infrastructure. Systems with fewer components can be installed faster, produce fewer errors and are easier to maintain, making them more reliable and cheaper in the long run.
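To see why each conversion matters, note that a power chain’s overall efficiency is the product of every stage’s efficiency, so each extra AC/DC hop multiplies in another loss. The stage efficiencies in this sketch are hypothetical round numbers chosen for illustration, not measurements of any real UPS or server power supply.

```python
def chain_efficiency(stages):
    """Overall efficiency of a power chain: the product of the
    efficiency of each conversion stage."""
    total = 1.0
    for eff in stages:
        total *= eff
    return total

# Hypothetical AC chain: UPS rectifier (AC->DC), UPS inverter (DC->AC),
# then the server PSU (AC->DC), each assumed 96% efficient.
ac_chain = chain_efficiency([0.96, 0.96, 0.96])  # ~0.885

# Hypothetical DC chain: one rectification stage plus a DC-DC stage.
dc_chain = chain_efficiency([0.96, 0.98])  # ~0.941

print(f"AC chain: {ac_chain:.1%}  DC chain: {dc_chain:.1%}")
```

Under these assumed numbers, simply dropping one conversion stage recovers several percent of the power, before even counting the cooling saved by generating less heat.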

 

Thus, making the change to DC power could eliminate much of this power loss, save energy, help the environment and benefit businesses, but it would require an industry-wide shift in perspective. Data centers in several countries already use DC technology, but there are no standards for its use. Efforts are being made to standardize DC power, and while not all have been successful thus far, we might start to see a gradual shift from AC to DC. Skills and knowledge of DC technology will also need to be developed if it is to become the dominant standard, and the availability of DC components would need to increase. DC systems still have some heat loss, and they still require air conditioning, fire protection, building control and access control systems. The biggest obstacle will be getting people to recognize that using DC in data centers could have enormous benefits. Once the perception begins to change, people will be more likely to switch to DC data centers and adopt new standards that make data centers more economical and environmentally friendly.

 

Industry Overview: Enterprise Networks

February 15, 2018 at 8:00 AM

 

In this week’s post we will take a look at the main areas or segments of an Enterprise communications network.

 

Data Centers/Main Distribution Frame (MDF)

 

Data centers (sometimes referred to as main distribution frames) are a crucial part of many businesses and institutions. The MDF is where the connection from the Telco or carrier typically enters the building. Many times the MDF is located in the basement or on the first floor of a building. The MDF usually houses server racks, patch panels, Ethernet routers and switches, and uninterruptible power supplies (UPS). In a multi-floor building, the MDF is usually connected to the floor(s) above it via fiber optic cabling supporting many gigabits per second of throughput to offer voice, video and data services to hundreds of users in the building.

 

Here is an example of a typical data center configuration:

 


Intermediate Distribution Frame (IDF)

 

An Intermediate Distribution Frame (IDF) is the area that the MDF connects to on each floor of a building. Depending on the size of the building and the number of users, the IDF can be thought of as a small MDF serving the users on the floor where it is located.

 

The IDF is typically made up of equipment racks, fiber and copper cabling, patch panels, Ethernet switches and UPS systems.

 

Here is an example:

 

L-com stocks a wide range of components and solutions to keep your enterprise network connected. To read and download our Enterprise Network Overview PDF, click here.

 

© L-com, Inc. All Rights Reserved. L-com, Inc., 50 High Street, West Mill, Third Floor, Suite 30, MA 01845