Another approach, if the company uses Unix systems, is to target Sun’s Network
File System (NFS), the de facto standard for Unix file sharing. This allows a number of
workstations to use a network disk drive as if it were a local disk; it has a number of
well-known vulnerabilities to attackers who’re on the same LAN. When a volume is
first mounted, the client requests from the server a root filehandle, which refers to the
root directory of the mounted file system. This doesn’t depend on the time, or the
server generation number, and it can’t be revoked. There is no mechanism for per-user
authentication; the server must trust a client completely or not at all. Also, NFS servers
often reply to requests from a different network interface to the one on which the re-
quest arrived. So it’s possible to wait until an administrator is logged in at a file server,
then masquerade as her to overwrite the password file. For this reason, many sites use
alternative file systems, such as AFS.
18.2.2 Attacks Using Internet Protocols and Mechanisms
Moving up to the Internet protocol suite, the fundamental problem is similar: there is
no real authenticity or confidentiality protection in most mechanisms. This is particu-
larly manifest at the lower-level TCP/IP protocols.
Consider, for example, the three-way handshake used by Alice to initiate a TCP
connection to Bob and to set up sequence numbers, shown in Figure 18.1.
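For readers without the figure to hand, the message flow of Figure 18.1 can be sketched as follows. This is purely an illustration of the exchange, with the sequence numbers X and Y chosen arbitrarily.

    # A minimal sketch of the three-way handshake of Figure 18.1.  X and Y are
    # the initial sequence numbers chosen by Alice and Bob respectively.
    def three_way_handshake(X, Y):
        return [
            ("Alice -> Bob", {"SYN": True, "seq": X}),
            ("Bob -> Alice", {"SYN": True, "ACK": True, "seq": Y, "ack": X + 1}),
            ("Alice -> Bob", {"ACK": True, "seq": X + 1, "ack": Y + 1}),
        ]

    for direction, segment in three_way_handshake(X=1000, Y=9000):
        print(direction, segment)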
This protocol can be exploited in a surprising number of different ways. Now that
service denial is becoming really important, let’s start off with the simplest service
denial attack: the SYN flood.
18.2.2.1 SYN Flooding
The SYN flood attack is, simply, to send a large number of SYN packets and never
acknowledge any of the replies. This leads the recipient (Bob, in Figure 18.1) to accu-
mulate more records of SYN packets than his software can handle. This attack had
been known to be theoretically possible since the 1980s, but came to public attention
when it was used to bring down Panix, a New York ISP, for several days in 1996.
A technical fix, the so-called SYNcookie, has been found and incorporated in Linux
and some other systems. Rather than keeping a copy of the incoming SYN packet, B
simply sends out as Y an encrypted version of X. That way, it’s not necessary to retain
state about sessions that are half-open.
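Real implementations pack the cookie into particular bit fields and also encode parameters such as the maximum segment size; the following is only a minimal sketch of the idea, assuming a keyed hash over the connection endpoints and a coarse timestamp, with invented function names.

    import hmac, hashlib, time

    SECRET = b"server-side secret"               # known only to Bob

    def syn_cookie(src, sport, dst, dport, client_isn, t):
        # Derive the value Y that Bob will use as his initial sequence number
        # from the connection endpoints, the client's X and a coarse
        # timestamp, under a keyed hash, instead of storing any state.
        msg = f"{src}:{sport}>{dst}:{dport}|{client_isn}|{t}".encode()
        return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

    def on_syn(src, sport, dst, dport, client_isn):
        # Reply to the SYN with a SYN-ACK carrying Y = cookie; keep nothing.
        t = int(time.time()) >> 6                # roughly one-minute windows
        return syn_cookie(src, sport, dst, dport, client_isn, t)

    def on_ack(src, sport, dst, dport, client_isn, ack):
        # The final ACK acknowledges Y + 1 (and X can be recovered from its
        # sequence number), so the connection is accepted only if the
        # acknowledged value matches a recently issued cookie.
        t = int(time.time()) >> 6
        return any((ack - 1) % 2**32 == syn_cookie(src, sport, dst, dport, client_isn, t - d)
                   for d in (0, 1))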
Figure 18.1 TCP/IP handshake.
18.2.2.2 Smurfing
Another common way of bringing down a host is known as smurfing. This exploits the
Internet Control Message Protocol (ICMP), which enables users to send an echo
packet to a remote host to check whether it’s alive. The problem arises with broadcast
addresses that are shared by a number of hosts. Some implementations of the Internet
protocols respond to pings to both the broadcast address and their local address (the
idea was to test a LAN to see what’s alive). So the protocol allowed both sorts of be-
havior in routers. A collection of hosts at a broadcast address that responds in this way
is called a smurf amplifier.
The attack is to construct a packet with the source address forged to be that of the
victim, and send it to a number of smurf amplifiers. The machines there will each re-
spond (if alive) by sending a packet to the target, and this can swamp the target with
more packets than it can cope with. Smurfing is typically used by someone who wants
to take over an Internet relay chat (IRC) server, so they can assume control of the cha-
troom. The innovation was to automatically harness a large number of “innocent” ma-
chines on the network to attack the victim.
Part of the countermeasure is technical: a change to the protocol standards in August
1999 so that ping packets sent to a broadcast address are no longer answered [691]. As
this gets implemented, the number of smurf amplifiers on the Net is steadily going
down. The other part is socioeconomic: sites such as www.netscan.org produce lists of
smurf amplifiers. Diligent administrators will spot their networks on there and fix
them; the lazy ones will find that the bad guys use their bandwidth more and more,
and will thus be pressured into fixing the problem.
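A diligent administrator can also run a quick self-test: send a few echo requests to the network's directed broadcast address from outside and see whether more than one host answers. The sketch below assumes the Linux ping utility and uses a placeholder broadcast address from the documentation range.

    import re, subprocess

    def smurf_amplifier_check(broadcast_addr="192.0.2.255", count=3):
        # Ping the directed broadcast address and count distinct responders;
        # more than one reply suggests the network would amplify a forged
        # ping.  Run from outside the network, this also tests whether the
        # border router forwards directed broadcasts at all.
        out = subprocess.run(["ping", "-b", "-c", str(count), broadcast_addr],
                             capture_output=True, text=True).stdout
        return set(re.findall(r"bytes from ([0-9.]+)", out))

    print(smurf_amplifier_check())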
18.2.2.3 Distributed Denial-of-Service Attacks
A more recent development along the same lines made its appearance in October 1999.
This is the distributed denial of service (DDoS) attack. Rather than just exploiting a
common misconfiguration as in smurfing, an attacker subverts a large number of ma-
chines over a period of time, and installs custom attack software in them. At a prede-
termined time, or on a given signal, these machines all start to bombard the target site
with messages [253]. The subversion may be automated using methods similar to those
in the Morris worm.
So far, DDoS attacks have been launched at a number of high-profile Web sites, in-
cluding Amazon and Yahoo. They could be even more disruptive, as they could target
services such as DNS and thus take down the entire Internet. Such an attack might be
expected in the event of information warfare; it might also be an act of vandalism by
an individual. Curiously, the machines most commonly used as hosts for attack soft-
ware in early 2000 were U.S. medical sites. They were particularly vulnerable because
the FDA insisted that medical Unix machines, when certified for certain purposes, had
a known configuration. Once bugs had been discovered in this, there was a guaranteed
supply of automatically hackable machines to host the attack software (another exam-
ple of the dangers of software monoculture).
At the time of writing, the initiative being taken against DDoS attacks is to add
ICMP traceback messages to the infrastructure. The idea is that whenever a router for-
wards an IP packet, it will also send an ICMP packet to the destination with a prob-
ability of about 1 in 20,000. The packet will contain details of the previous hop, the
next hop, and as much of the packet as will fit. System administrators will then be able
to trace large-scale flooding attacks back to the responsible machines, even when the
attackers use forged source IP addresses to cover their tracks [93]. It may also help
catch large-scale spammers who abuse open relays – relays allowing use by "transit"
traffic, that is, messages which neither come from nor go to email addresses for which
that SMTP server is intended to provide service.
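As a rough sketch of the traceback idea (the message format below is invented for illustration; the actual proposal defines its own ICMP message type), a router's forwarding path might look something like this:

    import random

    TRACEBACK_PROBABILITY = 1 / 20000       # roughly the sampling rate described above

    def forward(packet, prev_hop, next_hop, send):
        # packet is a dict such as {"dst": "203.0.113.5", "raw": b"..."}.
        send(next_hop, packet)               # forward the packet as usual
        if random.random() < TRACEBACK_PROBABILITY:
            # With small probability, also tell the destination which link the
            # packet came over, so that flood victims can reconstruct the path
            # even when source addresses are forged.
            send(packet["dst"], {
                "type": "icmp-traceback",                # illustrative only
                "previous_hop": prev_hop,
                "next_hop": next_hop,
                "packet_fragment": packet["raw"][:64],   # as much as will fit
            })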
18.2.2.4 Spam and Address Forgery
Services such as email and the Web (SMTP and HTTP) assume that the lower levels are
secure. The most that’s commonly done is a look-up of the hostname against an IP ad-
dress using DNS. So someone who can forge IP addresses can abuse the facilities. The
most common example is mail forgery by spammers; there are many others. For exam-
ple, if an attacker can give DNS incorrect information about the whereabouts of your
company’s Web page, the page can be redirected to another site—regardless of any-
thing you do, or don’t do, at your end. As this often involves feeding false information
to locally cached DNS tables, it’s called DNS cache poisoning.
18.2.2.5 Spoofing Attacks
We can combine some of the preceding ideas into spoofing attacks that work at long
range (that is, from outside the local network or domain).
Say that Charlie knows that Alice and Bob are hosts on the target LAN, and wants to
masquerade as Alice to Bob. He can take Alice down with a service denial attack of
some kind, then initiate a new connection with Bob [559, 90]. This entails guessing the
sequence number Y, which Bob will assign to the session, under the protocol shown in
Figure 18.1. A simple way of guessing Y, which worked for a long time, was for Char-
lie to make a real connection to Bob shortly beforehand and use the fact that the value
of Y changed in a predictable way between one connection and the next. Modern stacks
use random number generators and other techniques to avoid this predictability, but
random number generators are often less random than expected—a source of large
numbers of security failures [774].
If sequence number guessing is feasible, then Charlie will be able to send messages
to Bob, which Bob will believe come from Alice (though Charlie won’t be able to read
Bob’s replies to her). In some cases, Charlie won’t even have to attack Alice, just ar-
range things so that she discards Bob’s replies to her as unexpected junk. This is quite
a complex attack, but no matter; there are scripts available on the Web that do it.
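To see why predictable initial sequence numbers matter, compare the two generators sketched below. With the first, one legitimate connection tells Charlie roughly what Y the next connection will receive, which is all the attack above needs; the fixed increment is illustrative rather than taken from any particular stack.

    import itertools, os

    # Old-style generator: the initial sequence number advances by a fixed
    # amount per connection, so observing one value predicts the next.
    _counter = itertools.count(start=1, step=64000)      # step is illustrative
    def predictable_isn():
        return next(_counter) % 2**32

    # Modern-style generator: 32 bits from a strong random source, so a prior
    # connection tells the attacker essentially nothing about the next Y.
    def random_isn():
        return int.from_bytes(os.urandom(4), "big")

    observed = predictable_isn()            # Charlie connects once and sees this Y
    guess = (observed + 64000) % 2**32      # then guesses the Y of the spoofed session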
18.2.2.6 Routing Attacks
Routing attacks come in a variety of flavors. The basic attack involves Charlie telling
Alice and Bob that a convenient route between their sites passes through his. Source-
level routing was originally introduced into TCP/IP to help get around bad routers. The
underlying assumptions—that “hosts are honest” and that the best return path is the
best source route—no longer hold, and the only short-term solution is to block source
routing. However, it continues to be used for network diagnosis.
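On Linux, for example, whether the kernel accepts source-routed packets is controlled by a sysctl; a small check might look like the sketch below (the /proc path is as exposed by current Linux kernels, and other operating systems configure this differently).

    from pathlib import Path

    def source_routing_accepted(interface="all"):
        # "0" means source-routed packets are dropped, which is the safe
        # short-term policy described above.
        path = Path(f"/proc/sys/net/ipv4/conf/{interface}/accept_source_route")
        return path.read_text().strip() != "0"

    if source_routing_accepted():
        print("warning: this host still accepts source-routed packets")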
Another approach involves redirect messages, which are based on the same false as-
sumption. These effectively say, “You should have sent this message to the other
gateway instead,” and are generally applied without checking. They can be used to do
the same subversion as source-level routing.
Spammers have taught almost everyone that mail forgery is often trivial. Rerouting
is harder, since mail routing is based on DNS; but it is getting easier as the number of
service providers goes up and their competence goes down. DNS cache poisoning is
only one of the tricks that can be used.
18.3 Defense against Network Attack
It might seem reasonable to hope that most attacks—at least those launched by script
kiddies—can be thwarted by a system administrator who diligently monitors the secu-
rity bulletins and applies all the vendors’ patches promptly to his software. This is part
of the broader topic of configuration management.
18.3.1 Configuration Management
Tight configuration management is the most critical aspect of a secure network. If you
can be sure that all the machines in your organization are running up-to-date copies of
the operating system, that all patches are applied as they’re shipped, that the service
and configuration files don’t have any serious holes (such as world-writeable password
files), that known default passwords are removed from products as they’re installed,
and that all this is backed up by suitable organizational discipline, then you can deal
with nine and a half of the top ten attacks. (You will still have to take care with appli-
cation code vulnerabilities such as CGI scripts, but by not running them with adminis-
trator privileges you can greatly limit the harm that they might do.)
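Even a simple script can catch some of the holes just listed. The sketch below checks for a world-writable password file and for a few well-known default account names; the account list is illustrative rather than exhaustive.

    import os, stat

    DEFAULT_ACCOUNTS = {"guest", "admin", "test"}        # illustrative examples

    def world_writable(path):
        # A password file that anyone can write to is one of the "serious
        # holes" mentioned above.
        return bool(os.stat(path).st_mode & stat.S_IWOTH)

    def suspicious_accounts(passwd="/etc/passwd"):
        with open(passwd) as f:
            users = {line.split(":", 1)[0] for line in f if ":" in line}
        return users & DEFAULT_ACCOUNTS

    if world_writable("/etc/passwd"):
        print("FAIL: /etc/passwd is world-writable")
    for account in suspicious_accounts():
        print("WARN: default-style account present:", account)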
Configuration management is at least as important as having a reasonable firewall;
in fact, given the choice of one of the two, you should forget the firewall. However,
it’s the harder option for many companies, because it takes real effort as opposed to
buying and installing an off-the-shelf product. Doing configuration management by
numbers can even make things worse. As noted in Section 18.2.2.3, U.S. hospitals had
to use a known configuration, which gave the bad guys a large supply of identically
mismanaged targets.
Several tools are available to help the systems administrator keep things tight. Some
enable you to do centralized version control, so that patches can be applied overnight,
and everything can be kept in synch; others, such as Satan, will try to break into the
machines on your network by using a set of common vulnerabilities [320]. Some fa-
miliarity with these penetration tools is a very good idea, as they can also be used by
the opposition to try to hack you.
The details of the products that are available and what they do change from one year
to the next, so it is not appropriate to go into details here. What is appropriate is to say
that adhering to a philosophy of having system administrators stop all vulnerabilities at
the source requires skill and care; even diligent organizations may find that it is just
too expensive to fix all the security holes that were tolerable on a local network but not
with an Internet connection. Another problem is that, often, an organization's most
critical applications run on the least secure machines, as administrators have not dared
to apply operating system upgrades and patches for fear of losing service.
This leads us to the use of firewalls.
18.3.2 Firewalls
The most widely sold solution to the problems of Internet security is the firewall. This
is a machine that stands between a local network and the Internet, and filters out traffic
that might be harmful. The idea of a “solution in a box” has great appeal to many orga-
nizations, and is now so widely accepted that it’s seen as an essential part of corporate
due diligence. (Many purchasers prefer expensive firewalls to good ones.)
Firewalls come in basically three flavors, depending on whether they filter at the IP
packet level, at the TCP session level, or at the application level.
18.3.2.1 Packet Filtering
The simplest kind of firewall merely filters packet addresses and port numbers. This
functionality is also available in routers and in Linux. It can block the kind of IP
spoofing attack discussed earlier by ensuring that no packet that appears to come from
a host on the local network is allowed to enter from outside. It can also stop denial-of-
service attacks in which malformed packets are sent to a host, or the host is persuaded
to connect to itself (both of which can be a problem for people still running Windows
95).
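The anti-spoofing rule is simple to state as code. The sketch below shows the decision a filtering router might make for each packet arriving on its external interface; the internal address range and the blocked port numbers are placeholders.

    import ipaddress

    INTERNAL = ipaddress.ip_network("10.0.0.0/8")        # placeholder local range
    BLOCKED_PORTS = {135, 137, 138, 139}                 # illustrative only

    def accept_from_outside(src_ip, dst_port):
        # Drop any packet arriving from outside that claims an internal source
        # address: it is either misrouted or forged.
        if ipaddress.ip_address(src_ip) in INTERNAL:
            return False
        # Port-number filtering is the other basic control at this level.
        return dst_port not in BLOCKED_PORTS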
Basic packet filtering is available as standard in Linux, but, as far as incoming at-
tacks are concerned, it can be defeated by a number of tricks. For example, a packet
can be fragmented in such a way that the initial fragment (which passes the firewall’s
inspection) is overwritten by a subsequent fragment, thereby replacing an address with
one that violates the firewall’s security policy.
18.3.2.2 Circuit Gateways
More complex firewalls, called circuit gateways, reassemble and examine all the pack-
ets in each TCP circuit. This is more expensive than simple packet filtering, and can
also provide added functionality, such as providing a virtual private network over the
Internet by doing encryption from firewall to firewall, and screening out black-listed
Web sites or newsgroups (there have been reports of Asian governments building na-
tional firewalls for this purpose).
However, circuit-level protection can’t prevent attacks at the application level, such
as malicious code.
18.3.2.3 Application Relays
The third type of firewall is the application relay, which acts as a proxy for one or
more services, such as mail, telnet, and Web. It’s at this level that you can enforce
rules such as stripping out macros from incoming Word documents, and removing ac-
tive content from Web pages. These can provide very comprehensive protection
against a wide range of threats.
The downside is that application relays can turn out to be serious bottlenecks. They
can also get in the way of users who want to run the latest applications.
18.3.2.4 Ingress versus Egress Filtering
At present, almost all firewalls point outwards and try to keep bad things out, though
there are a few military systems that monitor outgoing traffic to ensure that nothing
classified goes out in the clear.
That said, some commercial organizations are starting to monitor outgoing traffic,
too. If companies whose machines get used in service denial attacks start getting sued
(as has been proposed in [771]), egress packet filtering might at least in principle be
used to detect and stop such attacks. Also, there is a growing trend toward snitchware,
technology that collects and forwards information about an online subscriber without
their authorization. Software that "phones home," ostensibly for copyright enforcement
and marketing purposes, can disclose highly sensitive material such as local hard disk
directories. I expect that prudent organizations will increasingly want to monitor and
control this kind of traffic, too.
18.3.2.5 Combinations
At really paranoid sites, multiple firewalls may be used. There may be a choke, or
packet filter, connecting the outside world to a screened subnet, also known as a de-
militarized zone (DMZ), which contains a number of application servers or proxies to
filter mail and other services. The DMZ may then be connected to the internal network
via a further filter that does network address translation. Within the organization, there
may be further boundary control devices, including pumps to separate departments, or
networks operating at different clearance levels to ensure that classified information
doesn’t escape either outward or downward (Figure 18.2).
Such elaborate installations can impose significant operational costs, as many rou-
tine messages need to be inspected and passed by hand. This can get in the way so
much that people install unauthorized back doors, such as dial-up standalone machines,
to get their work done. And if your main controls are aimed at preventing information
leaking outward, there may be little to stop a virus getting in. Once in a place it wasn’t
expected, it can cause serious havoc. I’ll discuss this sort of problem in Section 18.4.6
later.
18.3.3 Strengths and Limitations of Firewalls
Since firewalls do only a small number of things, it’s possible to make them very sim-
ple, and to remove many of the complex components from the underlying operating
system (such as the RPC and sendmail facilities in Unix). This eliminates a lot of vul-
nerabilities and sources of error. Organizations are also attracted by the idea of having
only a small number of boxes to manage, rather than having to do proper system ad-
ministration for a large, heterogeneous population of machines.
Figure 18.2 Multiple firewalls.
Conversely, the appeal of simplicity can be seductive and treacherous. A firewall
can only be as good as its configuration, and many organizations don’t learn enough to
do this properly. They hope that by getting the thing out of the box and plugging it in,
the problem will be solved. It won’t be. It may not require as much effort to manage a
firewall as to configure every machine on your network properly in the first place, but
it still needs some. In [203], there is a case study of how a firewall was deployed at
Hanscom Air Force Base. The work involved the following: surveying the user com-
munity to find which network services were needed; devising a network security pol-
icy; using network monitors to discover unexpected services that were in use; and lab
testing prior to installation. Once it was up and running, the problems included ongo-
ing maintenance (due to personnel turnover), the presence of (unmonitored) communi-
cations to other military bases, and the presence of modem pools. Few nonmilitary
organizations are likely to take this much care.
A secondary concern, at least during the late 1990s, was that many of the products
crowding into the market simply weren’t much good. The business had grown so
quickly, and so many vendors had climbed in, that the available expertise was spread
too thinly.
The big trade-off remains security versus performance. Do you install a simple fil-
tering router, which won’t need much maintenance, or do you go for a full-fledged set
of application relays on a DMZ, which not only will need constant reconfiguration—as
your users demand lots of new services that must pass through it—but will also act as a
bottleneck?
An example in Britain was the NHS Network, a private intranet intended for all
health service users (family doctors, hospitals, and clinics—a total of 11,000 organiza-
tions employing about a million staff in total). Initially, this had a single firewall to the
outside world. The designers thought this would be enough, as they expected most traf-
fic to be local (as most of the previous data flows in the health service had been). What
they didn’t anticipate was that, as the Internet took off in the mid-1990s, 40% of traf-
fic at every level became international. Doctors and nurses found it very convenient to
consult medical reference sites, most of which were in America. Trying to squeeze all
this traffic through a single orifice was unrealistic. Also, since almost all attacks on
healthcare systems come from people who’re already inside the system, it was unclear
what this central firewall was ever likely to achieve.
Another issue with firewalls (and boundary control devices in general) is that they
get in the way of what people want to do, and so ways are found round them. As most
firewalls will pass traffic that appears to be Web pages and requests (typically because
it’s for port 80), more and more applications use port 80, as it’s the way to get things to
work through the firewall. Where this isn’t possible, the solution is for whole services
to be reimplemented as Web services (webmail being a good example). These pres-
sures continually erode the effectiveness of firewalls, and bring to mind John Gil-
more’s famous saying that ‘the Internet interprets censorship as damage, and routes
around it.’
Finally, it’s worth going back down the list of top ten attacks and asking how many
of them a firewall can stop. Depending on how it’s configured, the realistic answer
might be about four.
18.3.4 Encryption
In the context of preventing network attacks, many people have been conditioned to
think of encryption. Certainly, it can sometimes be useful. For example, on the network
at the lab I work in, we use a product called secure shell (SSH), which provides en-
crypted links between Unix and Windows hosts [817, 1, 597]. When I dial in from
home, my traffic is protected; and when I log on from the PC at my desk to another
machine in the lab, the password I use doesn’t go across the LAN in the clear.
Let’s stop and analyze what protection this gives me. Novices and policymakers
think in terms of wiretaps, but tapping a dial-up modem line is hard now that modems
use adaptive echo cancellation. It essentially involves the attacker inserting two back-
to-back modems into the link from my house to the lab. So this is a low-probability
threat. The risk of password sniffing on our LAN is much higher; it has happened in
the past to other departments. Thus, our network encryption is really providing a
lower-cost alternative to the use of handheld password generators.
Another approach is to do encryption and/or authentication at the IP layer, which is
to be provided in IPv6, and is available as a retrofit for the current IP protocol as IPsec.
An assessment of the protocol can be found in [290]; an implementation is described in
[782]. IPsec has the potential to stop some network attacks, and to be a useful compo-
nent in designing robust distributed systems, but it won’t be a panacea. Many machines
will have to connect to all comers, and if I can become the administrator of your Web
server by smashing the stack, then no amount of encryption or authentication is likely
to help you very much. Many other machines will be vulnerable to attacks from inside
the network, where computers have been suborned somehow or are operated by dis-
honest insiders. There will still be problems such as service denial attacks. Also, de-
ployment is likely to take some years.
A third idea is the virtual private network (VPN). The idea here is that a number of
branches of a company, or a number of companies that trade with each other, arrange
for traffic between their sites to be encrypted at their firewalls. This way the Internet
can link up their local networks, but without their traffic being exposed to eavesdrop-
ping. VPNs also don’t stop the bad guys trying to smash the stack of your Web server
or sniff passwords from your LAN, but for companies that might be the target of ad-
versarial interest by developed-country governments, it can reduce the exposure to in-
terception of international network traffic. (It must be said, though, that intercepting
bulk packet traffic is much harder than many encryption companies claim; and less
well-funded adversaries are likely to use different attacks.)
Encryption can also have a downside. One of the more obvious problems is that if
encrypted mail and Web pages can get through your firewall, then they can bring all
sorts of unpleasant things with them. This brings us to the problem of malicious code.
18.4 Trojans, Viruses, and Worms
If this book had been written even five years earlier, malicious code would have mer-
ited its own chapter.
Computer security experts have long been aware of the threat from malicious code,
or malware. The first such programs were Trojan horses, named after the horse the
Greeks ostensibly left as a gift for the Trojans but that hid soldiers who subsequently
opened the gates of Troy to the Greek army. The use of the term for malicious code
goes back many years (see the discussion in [493, p. 7]).
There are also viruses and worms, which are self-propagating malicious programs,
and to which I have referred repeatedly in earlier chapters. There is debate about the
precise definitions of these three terms: the common usage is that a Trojan horse is a
program that does something malicious (such as capturing passwords) when run by an
unsuspecting user; a worm is something that replicates; and a virus is a worm that rep-
licates by attaching itself to other programs.
18.4.1 Early History of Malicious Code
Malware seems likely to appear whenever a large enough number of users share a
computing platform. It goes back at least to the early 1960s. The machines of that era
were slow, and their CPU cycles were carefully rationed among different groups of
users. Because students were often at the tail of the queue, they invented tricks such
as writing computer games with a Trojan horse inside to check whether the program
was running as root, and if so to create an additional privileged account with a known
password. By the 1970s, large time-sharing systems at universities were the target of
more and more pranks involving Trojans. All sorts of tricks were developed.
In 1984, there appeared a classic paper by Thompson in which he showed that even
if the source code for a system were carefully inspected, and known to be free of vul-
nerabilities, a trapdoor could still be inserted. His trick was to build the trapdoor into
the compiler. If this recognized that it was compiling the login program, it would insert
a trapdoor such as a master password that would work on any account. Of course,
someone might try to stop this by examining the source code for the compiler, and then
compiling it again from scratch. So the next step is to see to it that, if the compiler rec-
ognizes that it’s compiling itself, it inserts the vulnerability even if it’s not present in
the source. So even if you can buy a system with verifiably secure software for the op-
erating system, applications and tools, the compiler binary can still contain a Trojan.
The moral is that you can’t trust a system you didn’t build completely yourself; vulner-
abilities can be inserted at any point in the tool chain [746].
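A toy rendering of the construction may make it concrete. Here "compiling" just means returning the source text that would actually be built, and all the names are invented; a faithful version would have to embed its own text quine-style, which is elided in the second case.

    MASTER = ' or supplied == "knock-knock"'     # toy master-password clause

    def trojaned_compile(source):
        out = source
        # Case 1: compiling the login program, so weaken its password check
        # to accept a master password on any account.
        if "def check_password" in source:
            out = out.replace("return stored == supplied",
                              "return stored == supplied" + MASTER)
        # Case 2: compiling the compiler itself, so re-insert this whole
        # function, and a compiler rebuilt from clean source stays trojaned.
        if "def compile(" in source:
            out = "# [self-reproducing trapdoor re-inserted here]\n" + out
        return out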
Computer viruses also burst on the scene in 1984, thanks to the thesis work of Fred
Cohen. He performed a series of experiments with different operating systems that
showed how code could propagate itself from one machine to another, and (as men-
tioned in Chapter 7) from one compartment of a multilevel system to another. This
caused alarm and consternation; and within about three years, the first real, live viruses
began to appear “in the wild.” Almost all of them were PC viruses, as DOS was the
predominant operating system. They spread from one user to another when users
shared programs on diskettes or via bulletin boards.
One of the more newsworthy exceptions was the Christmas Card virus, which spread
through IBM mainframes in 1987. Like the more recent Love Bug virus, it spread by
email, and so was ahead of its time. The next year brought the Internet worm, which
alerted the press and the general public to the problem.
18.4.2 The Internet Worm
The most famous case of a service denial attack was the Internet worm of November
1988 [263]. This was a program written by Robert Morris Jr which exploited a number
of vulnerabilities to spread from one machine to another. Some of these were general
(e.g., 432 common passwords were used in a guessing attack, and opportunistic use
was made of .rhosts files), and others were system specific (problems with sendmail,
and the fingerd bug mentioned in Section 4.4.1). The worm took steps to camouflage
itself; it was called sh and it encrypted its data strings (albeit with a Caesar cipher).
Morris claimed that this code was not a deliberate attack on the Internet, merely an
experiment to see whether his code could replicate from one machine to another. It
could. It also had a bug. It should have recognized already infected machines, and not
infected them again, but this feature didn’t work. The result was a huge volume of
communications traffic that completely clogged up the Internet.
Given that the Internet (or, more accurately, its predecessor the ARPANET) had
been designed to provide a very high degree of resilience against attacks—up to and
including a strategic nuclear strike—it was remarkable that a program written by a stu-
dent could disable it completely.
What’s less often remarked on is that the mess was cleaned up, and normal service
was restored within a day or two; that it only affected Berkeley Unix and its deriva-
tives (which may say something about the dangers of the creeping Microsoft