Security and latency

Just a Minute

Article from Issue 154/2013
Author(s): Kurt Seifried

In a high-performance environment, you want speed as well as security. Kurt looks at some approaches to security that won't slow things down.

High-performance computing (HPC) used to be all about buying the fastest system you could afford. It was simple: You had a problem, and you spent as much money as you could to buy a single fast system. Then, some clever people (Thomas Sterling and Donald Becker at NASA) came along and pointed out that computers had gotten quite a bit faster, and if they worked together, they could solve certain types of problems just as quickly for much less money. So, these folks took some commodity hardware and Ethernet gear, wrote some software to tie it all together, and created the Beowulf cluster [1].

Since then, the TOP500 supercomputer list [2] has changed from a list of systems running specialized software from Cray and friends – with a handful of distributed systems running Linux at the bottom – to a list of almost completely Linux-based systems (95.2% of systems and 97.4% of performance as of June 2013), while silicon speeds have basically topped out at 5GHz. Sure, you can go faster, but it's more expensive; going horizontal is much more cost effective.

All of these developments have made HPC vastly more affordable. The technologies and software designed and built to allow commodity hardware to handle large loads also work very well at a smaller scale. This means you can build a solution at a small scale (e.g., using Hadoop, memcached, MongoDB, CouchDB, MapReduce, and so on) and simply add nodes to scale up as necessary. You don't need to replace the system or make significant changes to the software; you can just add more systems running the exact same software.
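
As a small illustration of this scale-out approach, the following Python sketch uses the python-memcached client (my choice for the example – any of the technologies above would do) with hypothetical node names; adding capacity really is just a matter of extending the server list the clients use:

  # Sketch only: assumes the python-memcached module; node01..node03
  # are hypothetical host names for cache nodes running memcached.
  import memcache

  # Today's pool: two cache nodes.
  mc = memcache.Client(["node01:11211", "node02:11211"])
  mc.set("greeting", "hello")
  print(mc.get("greeting"))

  # Need more capacity? Stand up another node running the exact same
  # software and extend the list; the client hashes keys across it.
  mc = memcache.Client(["node01:11211", "node02:11211", "node03:11211"])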

Discuss Amongst Yourselves

Such systems are built using commodity hardware, and although Ethernet bandwidth has increased a lot (Gigabit is standard, 10Gb is becoming affordable), latency hasn't improved nearly as much. Typical Gigabit Ethernet latency is slightly less than a millisecond (so less than one thousandth of a second), which is not bad.

But with TCP/IP, you need to do round trips, and the initial setup (the three-way handshake) takes a few packets before you can start sending data. So, when you want to get data from a remote system and you need to establish a TCP/IP connection, you can eat up multiple milliseconds on the network traffic alone. This is nothing, however, compared with the delays that can be caused by authentication (especially if you use a central authentication server). So, you want security, but you also want speed. The question is: What trade-off do you make?
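
A quick way to see this on your own network is to time the TCP connection setup separately from a request sent over the already-open connection. The following Python sketch does just that; the host, port, and HTTP request are placeholders, so point it at a service you control:

  # Rough sketch: compare the cost of establishing a new TCP connection
  # with the cost of a request over a connection that is already open.
  import socket, time

  HOST, PORT = "192.168.1.10", 80   # placeholder address and port

  start = time.time()
  s = socket.create_connection((HOST, PORT))   # pays for the handshake
  print("connect: %.3f ms" % ((time.time() - start) * 1000))

  start = time.time()
  s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")        # reuses the open connection
  s.recv(1024)
  print("request on open connection: %.3f ms" % ((time.time() - start) * 1000))
  s.close()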

No Authentication

The fastest authentication is, of course, no authentication: No code to process, no server to contact, no credentials to pass around and verify. This approach does, however, open up some significant risks. If you decide to use no authentication on a server, minimally, you will need to firewall the service to trusted clients. Assuming these client systems do not have local users or run user-supplied code, you will be relatively safe from attackers – until, of course, they do get in. At that point, they will be able to spread through your infrastructure very quickly. The reality is that, in most systems, you have strong perimeter security and very little, if any, internal security, so this might be an acceptable trade-off.
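
If you do go this route, the firewall rules are the only control left in front of the service, so keep them tight. A minimal iptables sketch might look like the following; the port and the trusted subnet are example values for a hypothetical unauthenticated service:

  # Allow only the trusted client subnet to reach the service on TCP
  # port 11211 and drop everything else (example values only).
  iptables -A INPUT -p tcp --dport 11211 -s 10.0.42.0/24 -j ACCEPT
  iptables -A INPUT -p tcp --dport 11211 -j DROP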

Authenticate the Network/Traffic

One strategy is to move authentication to the network layer. If you have systems with no user accounts, or a trusted set of users (e.g., a computer cluster), or a series of specialized applications that don't talk to untrusted users or data sources, then it might make sense to authenticate at the system level. Using 802.1X, you can force systems to authenticate to the network switch before their network port is enabled; however, I can't imagine many situations in which you have a trusted set of systems and people plugging in random systems is a real problem.
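
For completeness, on the client side, wired 802.1X is typically handled by wpa_supplicant. A minimal sketch of a configuration, assuming PEAP with MSCHAPv2 and placeholder credentials, looks something like this:

  # /etc/wpa_supplicant/wired.conf (example values throughout)
  ap_scan=0
  network={
      key_mgmt=IEEE8021X
      eap=PEAP
      identity="node01"
      password="s3cret"
      phase2="auth=MSCHAPV2"
  }

  # Start the supplicant on the wired interface:
  wpa_supplicant -i eth0 -D wired -c /etc/wpa_supplicant/wired.conf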

IPsec or other VPN software, such as OpenVPN, is another option, but again, if you have a highly trusted set of systems, they are likely behind a firewall, so you don't need to worry about this. If you choose this approach, make sure that external access is heavily limited, and I don't mean just network ports – I mean any external access – data that gets passed in, DNS lookups, everything.
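
That said, if you do want an encrypted and authenticated tunnel between a pair of trusted hosts, a point-to-point OpenVPN setup with a pre-shared static key is about as simple as it gets. The host name, tunnel addresses, and key path below are examples only:

  # Generate a shared static key and copy it securely to both hosts:
  openvpn --genkey --secret /etc/openvpn/static.key

  # On host A (10.8.0.1/10.8.0.2 are example tunnel addresses):
  openvpn --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret /etc/openvpn/static.key

  # On host B, pointing at host A's real address:
  openvpn --remote hosta.example.com --dev tun --ifconfig 10.8.0.2 10.8.0.1 \
    --secret /etc/openvpn/static.key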


