The Soft Underbelly: Attacking the Client 

Jan 27, 2004 02:00 AM

by Tom Vogt

Since at least 1998 (see [Avolio]), security experts have warned that a perimeter defence alone is insufficient, and that the vast majority of networks are extremely vulnerable as soon as the firewall, proxy service or physical security layer at said perimeter has been breached.

The situation today has not changed much since 1998. Most security initiatives still concentrate on firewalls and other border devices; virus defence is the only area in which securing each individual client has achieved even a modest level of penetration.

None of this is news, though the extent of the danger is beginning to surface slowly, as more and more security experts point to the problem. Nevertheless, I believe strongly that the threat is still being underestimated, even by those who condemn perimeter defences.

Damage assessment

Loss of machines

I have recently pointed out in [Vogt] that even a large corporate network can be destroyed in minutes, once an entry point has been gained and malicious code of sufficient quality has been brought inside. [Hanson] elaborated and strengthened this point using past worms as examples.

The entire point of this analysis is that any breach of the perimeter, no matter how small, is potentially fatal if the interior network is soft. In my paper, a single compromised machine brought down 98% of a class B network in less than a minute. I know of no defence system, current or in development, that could withstand this kind of attack. Most importantly, while the worm is saturating the network, any central defence mechanism will be slowed down by the very attack it is supposed to be fighting.

The entire scenario is a typical one-vs-many problem. A centralized defence against a clever worm optimized for private networks will simply be overwhelmed by the sheer number of attackers, which multiply at dazzling speed. As with any disease, stopping it early is the only realistic defence, and immunization of the potential victims is the most reliable one.
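To make the speed of such saturation tangible, the following toy simulation models a random-scanning worm inside a flat, class B-sized network. It is only an illustrative Python sketch, not the propagation algorithm analysed in [Vogt]; the vulnerability ratio and scan rate are assumptions chosen purely to show the shape of the curve.

    import random

    # Toy random-scanning worm on a class B-sized network (65,536 addresses).
    # All parameters are illustrative assumptions, not taken from [Vogt].
    HOSTS = 65536            # size of the address space
    VULNERABLE_RATIO = 0.5   # assumed fraction of hosts the worm can infect
    SCANS_PER_TICK = 20      # assumed scan attempts per infected host per time step

    vulnerable = set(random.sample(range(HOSTS), int(HOSTS * VULNERABLE_RATIO)))
    infected = {next(iter(vulnerable))}   # start from a single compromised machine

    tick = 0
    while len(infected) < len(vulnerable):
        tick += 1
        new_infections = set()
        for _ in range(len(infected) * SCANS_PER_TICK):
            target = random.randrange(HOSTS)      # pick a random address to attack
            if target in vulnerable:
                new_infections.add(target)
        infected |= new_infections
        print("tick %2d: %5d of %d vulnerable hosts infected"
              % (tick, len(infected), len(vulnerable)))

Even with these modest, made-up parameters the infection grows geometrically, which is exactly why a centralized response tends to arrive after the network is already saturated.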

There are two kinds of damage in all of these scenarios. One is the loss of information. Even with a good, working backup strategy, work will be lost: at the very least, the work done between the last offline backup and the time of the attack (assuming that online backups will be wiped as well if any machine on the network has write access to the online archive). In many real-life scenarios, the loss of information will be much higher, as backup strategies are often found to be imperfect, users save data locally despite repeated warnings, and so on.

The second, and more disastrous, kind of damage is the potential loss of the entire network. This has already been shown to be not only possible but technically feasible, yet many organizations believe it simply will not happen and will thus be caught off-guard. Beyond the confusion and demoralization this causes, very few organizations are prepared for such a catastrophic scenario. Reinstalling hundreds or thousands of machines takes considerable time, during which productivity is severely reduced: users cannot access their data, and many processes that rely on the IT infrastructure cannot run.

Another thing to consider when significant damage occurs is that, in the absence of a structured rebuilding process, political issues will bog down recovery even further as management and departments fight (possibly quite amicably) over who gets restored first and which data and servers are the most critical.

Manipulation of data

Other attacks on the local network are less destructive and manipulate data instead of destroying it. These attacks are more sophisticated, and very few have made the headlines. The jailing of Zhao Zhe, who manipulated stock prices in China, is one of those few examples [NYT].

Manipulated data is not usually a killing blow to an organization, but more like a wound that keeps bleeding if not closed up. The damage is limited, but ongoing. Manipulated calculations can show a product as profitable even though the company is in fact constantly losing money on it. Manipulated business offers can drive a company towards a specific partner it would not have chosen had it had correct information. Bills, invoices and orders are other obvious targets for manipulation and result in direct financial loss.

Disruption of work

The disruption of work is a side effect of many attacks, but also a possible attack in itself. In the latter case, it can be extremely subtle. While an inability to access the network (as a side effect of a destructive attack) or severe slowdowns in the network infrastructure will be easily visible, pure disruption attacks through the use of malware could, for example, modify systems so that they fail more often, reboot at random but lengthy intervals, or corrupt vital but rarely accessed data on disk. Many of these disruptions will be blamed on the computer, the OS or the IT department instead of being reported, and can thus stay undetected for a long time. Depending on the particular skill sets in the IT department, even investigations might not reveal the true problem, and may result in hardware being exchanged or systems being reinstalled. Not every good system administrator is trained in spotting and identifying the activities of malicious code.

Disruptions will reduce productivity, resulting in a loss of work and money, or a rise in costs as the reduction is countered with more manpower or long hours.

Types of attacks

Attacking the clients

Currently, in most corporate infrastructures the servers are protected fairly well, while the clients are very much open to attack. The clients are shielded by perimeter firewalls, proxy servers, client-based virus scanners, virus scanners on the mail server, and so on. However, in my experience the client machines themselves are very seldom even at the current patch level, and almost never hardened or configured with local firewalls, host-based IDS or other defence systems.

Obviously, most IT managers believe an attacker would go for the servers, since they hold the data. Just as obviously, any attacker worth considering would go for the weakest link. The data might rest on the servers, but the clients are working with the data and very often have considerable access levels. Wiping the data from a file server is just as easy to do from a compromised client with the proper access levels as it is from the file server itself. A database can just as easily be dropped or modified from the DBA workstation as from the DB server.

There are a few attacks that can only be conducted from the server machines and from within the core infrastructure, but many more that do not rely on breaking into these usually moderately well-secured machines.

Attacking the users

On the other hand, there are many interesting attacks that are best carried out on the clients. These include information gathering (passwords, perhaps credit card numbers) as well as social engineering (modifying websites or e-mails, either incoming or outgoing). These are creative attacks, often subtle and not overtly destructive, but with possibly serious repercussions. Imagine, for example, a hypothetical payload that doubles any number prefixed by a dollar symbol ($) in all outgoing (unencrypted) mail, while halving all those in incoming mail. When the mail comes back quoted, the sender would see the quoted figure exactly as he originally wrote it, yet the recipient has been seeing twice that amount all along. This sort of malicious payload would throw an interesting wrench into negotiations, possibly killing a good number of deals before it is found, or making the sender commit to deals that will be costly yet difficult to bail out of.
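The payload logic itself is trivial; the attacker's real work lies in delivery and persistence. A few lines of hypothetical Python are enough to sketch the manipulation described above (the function names and the regular expression here are purely illustrative):

    import re

    # Hypothetical illustration only: numbers prefixed by a dollar sign are
    # doubled in outgoing text and halved in incoming text.
    DOLLAR_AMOUNT = re.compile(r"\$(\d+(?:\.\d{1,2})?)")

    def _scale(text, factor):
        return DOLLAR_AMOUNT.sub(lambda m: "$%g" % (float(m.group(1)) * factor), text)

    def tamper_outgoing(text):
        return _scale(text, 2.0)    # the sender's figures are silently doubled on the way out

    def tamper_incoming(text):
        return _scale(text, 0.5)    # quoted replies are halved back, hiding the change

    print(tamper_outgoing("We can deliver the units at $1500 apiece."))
    print(tamper_incoming("You quoted $3000 apiece; that is too expensive."))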

Dropping random mails would be another disruption of office communication, and a difficult one to catch. I am not talking about a global drop here, but rather, for example, a 25% chance that any mail sent to more than five people will not be delivered to one of them (randomly chosen). This is another attack that does not destroy the gears but puts sand into them, in this case by disrupting the information flow. It might be the proper attack vector for a competitor or a disgruntled ex-employee who cannot risk the full-scale investigation that a clear-cut case with tangible damages would trigger.

Finally, many Windows users have become used to occasional crashes or odd behaviour. A payload that simply causes more of these crashes, especially at critical times, would not be recognized as an attack for a long time. For example, it could be realized as a 10% chance to crash every time a document is saved, crashing just before the save completes or perhaps corrupting the saved file as well.

In all these cases, it is not so much the technical infrastructure that is being attacked, as it is the users and their expectations that are being exploited. One could argue that it is just another type of attack on the weakest link.

 

Attacking other devices

The corporate network, however, does not consist of computers alone. [Phenoelit], for example, has shown how other devices such as printers can be used to store data or otherwise work for the attacker. Since such devices are almost never updated in any corporation, they are usually the weakest link, and as such prime targets for an attack.

It is true that many devices would not make for a high-quality attack vector. Depending on the type of device, the ability to run code on it may be very limited. However, some devices, such as the more recent HP printers, offer the ability to run almost any code an attacker could need. Additionally, modern office copier/printers often have large hard drives and fast processors, and run embedded versions of Unix or Windows, yet have minimal security. It is not uncommon for these devices to be compromised and turned into warez servers, or put to more malicious use, unbeknownst to the IT staff. Moreover, these devices are among the last places that will be checked if an intrusion is suspected. They are, in a way, a good hideout and staging area.

Physical attacks

Having seen many switch rooms and computing centers, I confidently declare that an attacker with physical access to any of these and 15 minutes of undisturbed time can create enough of a mess that it will take a full day to undo the damage. Switching a bunch of connections around, taking the documentation with him, unplugging every cable from a switch or two and maybe physically destroying one or two pieces of equipment will almost always ensure that the network technicians are in for a difficult time.

The same is true of server systems, where spare parts for critical (or supposedly fail-safe) components are often not available, or available only in minimal numbers. Likewise, information gathering is so much easier if the attacker can walk out with the entire hard disk. Fortunately, physical security is usually good at most companies, and it often takes a good social engineer to gain the access necessary for attacks that require physical presence.

Perimeter leaks

I will not say much about WLAN security and its smaller sisters such as Bluetooth, or the entire area of TEMPEST [note 1]. They need to be included here for completeness, but many others have written extensively on these subjects.

In the same vein, access to the corporate network via ethernet ports in the lobby, and other common fooleries, still exists in many organizations. The same is true for direct dialup access to client machines. I am not talking about the official dialup pool here, but about the modem that someone (perhaps an executive officer) plugged into both his desktop machine and the telephone network before installing PC Anywhere. All of these are old topics, so I shall let them rest here and return to more interesting areas.

Attacking the security infrastructure

Firewalking (see [Goldsmith] and [Spitzner]) is an old technique that demonstrates the interesting approach of using the security infrastructure as an attack tool. The bzip2 bombs seen in the wild during the writing of this paper [note 2] are another example, again a copy of a very old technique dating back at least to pre-Internet BBS times.

What these and other methods that attack the infrastructure have in common is that the security systems themselves are the targets, and that through them information is gathered or service is disrupted. In other words: the very systems that are meant to protect the network can be turned against it. With the mail server out of service because the virus checker has been "bzip2-bombed", mail flow stops. Without a centralized virus checker this would not have happened, yet very few would consider delivering mail without scanning it for viruses on the server. There are many other examples of these techniques.

Automated countermeasures are especially susceptible to attacks of this kind. Every time a device reacts by blocking unauthorized access, an attacker may have the option to fake or intentionally trigger these blocks. Port security, or firewalls that automatically block IP addresses based on some rule [note 3], are examples. Spoofing IP or MAC addresses allows an attacker to provoke blocks for systems he wants locked out, or simply to blanket the device with blocks so that it disrupts the very service it should be providing.
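To see why a tiny attachment can take a scanning mail gateway out of service, it helps to look at the compression ratio achievable on trivially repetitive data. The short Python sketch below is not an attack; it merely measures how small 100 MB of zero bytes becomes under bzip2, which is the asymmetry a decompression bomb exploits:

    import bz2

    # Compress 100 MB of zero bytes in 1 MB chunks and report the ratio.
    compressor = bz2.BZ2Compressor()
    chunk = b"\x00" * (1024 * 1024)
    compressed = b""
    for _ in range(100):
        compressed += compressor.compress(chunk)
    compressed += compressor.flush()

    original_size = 100 * 1024 * 1024
    print("input: %d bytes, compressed: %d bytes, ratio roughly %d:1"
          % (original_size, len(compressed), original_size // len(compressed)))

A scanner that naively decompresses every attachment in memory or on disk pays the full, uncompressed price for something the attacker sent in a fraction of the space.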

This is by no means an invitation to disable security systems. The virus checker is certainly required if you run any Windows systems on your network, and nobody sane would advocate doing away with the firewall. Nevertheless, the presence of security systems is not identical to the presence of security. A firewall, filtering gateway, IDS or honeypot run by someone without sufficient technical knowledge may be more dangerous to the organization than its absence, as poorly configured security products are easy targets. Firewalls have been used as break-in points. IDS systems are useful for keeping security people busy analyzing data while the attacker enjoys a stroll through the network. To a true hacker, every system, even one designed to hamper him, is a tool that can also be used.

Attacking security assumptions

This is an entire field in itself. Too much security is based on assumptions that are not known to be true, or that are even known to be untrue. NAT is an example. NAT is not a security feature, yet it is often used as such, and NAT gateways are commonly used instead of firewalls, under the misguided assumption that they protect the machines inside the private network.

MPLS and many other tunnelling mechanisms are further examples where routing is believed to secure traffic, while the hard truth is that only end-to-end encryption can ensure data integrity. This is not the fault of MPLS - it is merely too often sold as something it is not. There are many other "MPLSs" in the security market today.

Summary

The vast majority of corporate networks rely on perimeter defence as their primary security feature. Once inside, an attacker seldom has trouble taking over as much of the network as he likes.

Insecure client machines are primary targets and cannot adequately be protected by border firewalls, a mail server's anti-virus software or physical walls alone, and client machines are almost always granted higher access levels than they strictly require. The security of any given system is always that of its weakest link.

The technology exists to mitigate these risks, but it is complex, seldom used and impacts the "user experience". In other words, it makes it harder to work with the computer systems. Hardening each client is a non-trivial task, and the administration of secure operating systems with features such as RBAC or MAC requires skills that too few administrators have. The walls around our cities are high and strong. Inside, we are still building wooden huts.

Conclusion

In order to control the risks outlined in this article, corporations will have to rethink their internal security strategy or, in some cases, create one that did not previously exist. For the moment, there are a few key measures that can be applied immediately to reduce risk and eliminate many known weak spots:

 

  1. Every machine and device on the network needs a local firewall or ACL rules.
  2. Any and all data transfers should be encrypted.
  3. Any and all logins or authorization procedures should be encrypted, and passwords stored locally must be encrypted as well.
  4. Key-based server authorization should replace address-based server authorization.

All of these problems can be addressed with existing technology, using many different approaches. For example, using SSH instead of telnet and SCP instead of FTP are simple fixes that many large organizations have yet to implement. Running the old services on top of IPSec is another approach. Encrypted file systems are readily available, as are host-based firewalls.
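As an illustration of points 2 to 4 above, the sketch below replaces a cleartext FTP upload with an encrypted, key-authenticated transfer. It is a minimal example under stated assumptions: the host name, user name and key path are placeholders, and the Python paramiko library is merely one convenient way to do this.

    import paramiko

    # Minimal sketch: encrypted file transfer with key-based authentication.
    # Host, user and key path are placeholders, not a recommended setup.
    client = paramiko.SSHClient()
    client.load_system_host_keys()                                # trust only known server keys
    client.set_missing_host_key_policy(paramiko.RejectPolicy())   # never trust by address alone

    client.connect("fileserver.example.com", username="backup",
                   key_filename="/home/backup/.ssh/id_rsa")       # key-based, not password-based

    sftp = client.open_sftp()                                     # encrypted channel instead of FTP
    sftp.put("report.xls", "/data/reports/report.xls")
    sftp.close()
    client.close()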

The deployment and maintenance of these technologies does take time and, in some cases, specialized knowledge, resulting in higher costs. Whether or not this price is worth paying depends on one factor: how much damage would a serious intrusion into the internal network cost your organization?


Notes

[note 1] See [Wikipedia] for more information on this specific example.

[note 3] Anti-flood systems work this way sometimes - send more than X packets a second for Y seconds and you get blacklisted.

[note 4] MultiLevel Security - a system that assigns security classification levels to objects, especially data, and restricts access according to these. Essentially, a computer-based "sensitivity level" system. First used in Multics, 1975.

[Avolio] Frederick M. Avolio: "Multi-Dimensional Approach to Internet Security". May 1998. - http://www.securitystats.com/reports/MultiDimensional.html

[Vogt] Tom Vogt: "Simulating and optimising worm propagation algorithms", September 2003. - http://web.lemuria.org/security/WormPropagation.pdf

[Phenoelit] Phenoelit Group: Various Papers on embedded systems. - http://www.phenoelit.de/stuff/papers.html

[Hanson] Hanson et al.: "A Comparison Study of Three Worm Families and Their Propagation in a Network", December 2003. - http://www.securityfocus.com/infocus/1752

[NYT] New York Times: "China Jails Computer Hacker for Stock Manipulation", November 1999.

[Wikipedia] Wikipedia entry on TEMPEST - http://en.wikipedia.org/wiki/TEMPEST

[Spitzner] Lance Spitzner on TTL-Firewalking in "Auditing Your Firewall Setup", December 2000 - http://www.spitzner.net/audit.html

[Goldsmith] David Goldsmith, Michael Schiffman: "Firewalking", October 1998 - http://www.packetfactory.net/Projects/firewalk/

This article originally appeared on SecurityFocus.com -- reproduction in whole or in part is not allowed without expressed written consent.
