CSE 127: Lecture 1
The topics covered in this lecture are the course code of ethics, the
course syllabus, and the security/threat model.
Unlike almost all of your other courses, you will learn things in this
class that you should never apply in real life. This is
because in order to learn to build robust computer systems and to
learn to protect systems, you must not only learn the basic sound
engineering techniques, but you must also know about the weaknesses
that attackers will strive to exploit. To gain a solid understanding
of these weaknesses, you will be learning how systems can be
penetrated --- you should never make use of this knowledge to
penetrate real systems unless you have been given explicit
permission by the owners/administrators of the target systems to do
so; furthermore, you should not aid others in breaking into
systems uninvited in any way, including teaching them what you learned
in this class. To pass on what you learned, you must similarly ask
those whom you teach to sign the following Computer Security Code of
Ethics.
In order to get a grade for the course, you must agree to not break
into computer systems without explicit permission, and you must
understand that if I find out that you violated this, I will place a
letter with your transcript that describes what happened. Please
print out and sign the following and return it to me during the third
lecture: Code of Ethics [PS].
See the online syllabus.
A "security model" is used to define what the security assumptions
are, to determine what is at risk, to try to quantify that risk, and
to figure out appropriate ways to protect what is at risk.
We build a security model by determining the following: goal
identification, threat assessment, and security assumptions.
Security Goals
Determining our security goals is simple in principle: exhaustively
enumerate what you want protected. The catch is that if you miss
something and the security system that you design/build doesn't happen
to protect the thing that you missed, then you lose. Knowing what
should be protected, next determine how much it is worth and what the
downsides are if security is violated. This is usually done in
conjunction with determining what the security requirements are -- in
what way are the assets to be protected?
Note that the initial security requirements are not final. The
reason for this is that it may be too costly to provide the desired
security properties -- or perhaps it may even be impossible. The
system security posture cannot be fully determined until technical
feasibility can be addressed.
Assets / Requirements
What are some typical security assets to be protected? Often it is
some data in a computer file that should remain secret or unmodified.
Sometimes it is preventing unauthorized users from running certain
programs, or from running programs with certain parameters. In the world of
electronic commerce, this would also include having web servers up and
running as an asset: this is important for providing product
information, for investor relations, for actual product purchase
(whether business-to-business or business-to-consumer), etc.
To the military, keeping what should be secret secret is often
critical. If you have seen the movie
Mission
Impossible (1996), you have been exposed to the idea, albeit a
sensationalized Hollywood version. (There is no ``NOC list''.)
Confidentiality
Confidential information has various characteristics. An obvious one
is who has access and who doesn't, or in other words, who is
authorized to access the data. Another is that confidential
data typically have a lifetime. Certainly, battle plans must be
confidential during the planning and early deployment stage, but after
the war is over there is no need for secrecy: historians and war buffs
ought to be able to get their hands on them. Similarly, in the
corporate environment, plans for a new product are confidential during
product inception, prototype development, and perhaps early
manufacturing, but after the key ideas are patented and the initial
rollout has occurred, there's no need for secrecy: as a matter of
fact, the marketing department typically wants to widely disseminate
the information to the consumers.
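To make the mechanism concrete, here is a minimal sketch (not from the
original notes) of providing confidentiality for data at rest with
symmetric encryption. It assumes the third-party Python "cryptography"
package; the plaintext is invented for illustration.

    # Confidentiality sketch: only holders of the key can read the data.
    # Assumes the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # distribute only to authorized parties
    f = Fernet(key)

    ciphertext = f.encrypt(b"new product plans: launch in Q3")
    # Without the key, the ciphertext reveals nothing about the plaintext
    # beyond (roughly) its length.
    print(f.decrypt(ciphertext))  # b'new product plans: launch in Q3'

Note how this maps onto the lifetime point above: once secrecy is no
longer needed, the key or the plaintext can simply be published.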
Integrity
System and data integrity is another often cited security goal. Even
if that valuable trade secret remains secret, it would be very bad if
an adversary could destroy it sight unseen. Imagine that Coke's secret
formula, stored in a computer (instead of the purported safe in
Atlanta, Georgia), could be destroyed by Pepsi using a computer virus.
(Not that Pepsi would do such a thing.) The public relations damage
would be terrible -- Coke has built a sense of mystique about its
secret formula, and regardless of whether it can quickly develop
another formulation that tastes similar to the original, it may very
well lose market share if consumers couldn't get the Real Thing any
more.
The military version of this is pretty obvious. Suppose battle plans
could be altered (e.g., to bad ones, or just ones that are known to the
enemy), or perhaps the software for the targeting computers of ICBMs
changed so that all target coordinates that are entered are treated as
if they were the coordinates of Washington, D.C. Even though the
original secret battle plans are not revealed to the enemy and the
ICBMs aren't destroyed, the results are still disastrous.
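Here is a minimal sketch of how modifications like these can be
detected, using the Python standard library's keyed hash (HMAC); the
key and the data are invented for illustration.

    # Integrity sketch: a keyed hash (HMAC) detects any modification of
    # the data by someone who does not hold the key. Standard library only.
    import hashlib
    import hmac

    key = b"shared-secret-key"                    # invented for illustration
    data = b"target coordinates: 32.88N 117.23W"

    tag = hmac.new(key, data, hashlib.sha256).hexdigest()  # stored alongside data

    def verify(key: bytes, data: bytes, tag: str) -> bool:
        """Return True iff data is unchanged since the tag was computed."""
        expected = hmac.new(key, data, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    print(verify(key, data, tag))                        # True
    print(verify(key, data.replace(b"32", b"38"), tag))  # False: tampering detected

Detection is not prevention, of course, but it keeps an altered battle
plan from being silently trusted.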
Availability
Availability of the assets is important. Even if the attacker does not
obtain a copy of your data assets (violating confidentiality) or
destroy them, merely keeping you from using them can cause real damage.
Availability attacks have been in the news: distributed denial of
service (DDoS) attacks have shut down or seriously crippled several
important web sites. This is a form of jamming attack; the physical
analog to this is jamming radio waves by broadcasting a stronger
(noisy) signal.
There are other kinds of denial of service attacks which do not
generate lots of network activity. One example is the so-called
``ping of death'' of a few years ago, in which an attacker sends a
malformed network packet that causes the receiving operating system to
lock up.
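As a minimal sketch (with invented details, not a real IP stack), the
kind of bounds check whose absence made the ping of death possible
looks like this:

    # Availability sketch: reject malformed input before it can crash
    # the handler. 65535 bytes is the maximum legal IP datagram size; a
    # reassembled "ping of death" datagram exceeds it.
    MAX_IP_DATAGRAM = 65535

    def process(packet: bytes) -> None:
        print(f"processing {len(packet)}-byte packet")   # stand-in handler

    def handle_packet(packet: bytes) -> None:
        if len(packet) > MAX_IP_DATAGRAM:
            return  # malformed: drop it rather than overrun a fixed buffer
        process(packet)

The vulnerable systems effectively skipped this check and copied
oversized reassembled datagrams into fixed-size buffers.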
Non-repudiation
Non-repudiation goes to the heart of responsibility. It means that
you cannot repudiate or disclaim that you did something. There are
different ways that this is done in computer systems. The first is
through positive user authentication, authorization, and audit trails.
This is used, for example, for commands that switch privilege levels
(su, sudo, etc. in Unix) or for certain kinds of network accesses
(identd, RFC 1413). The second is via cryptographic techniques,
typically digital signatures.
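Here is a minimal sketch of the digital-signature approach, assuming
the third-party Python "cryptography" package; the message is invented
for illustration.

    # Non-repudiation sketch: only the holder of the private key could
    # have produced this signature, so the signer cannot later disclaim it.
    # Assumes the third-party "cryptography" package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()   # held only by the signer
    message = b"I authorize payment of $100"
    signature = signing_key.sign(message)

    # Anyone holding the public key can check the signature.
    verify_key = signing_key.public_key()
    try:
        verify_key.verify(signature, message)
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")

(In a deployed system the public key would itself be certified, e.g. by
a certificate authority, so that the verifier knows whose key it is.)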
In the physical world, a common way for non-repudiation to be achieved
is through written signatures, whether they are on checks, credit card
receipts, or similar money instruments. Signatures on other forms of
legal documents such as contracts similarly cause a binding
relationship to be formed, whereby neither of the parties involved can
escape their obligations.
Continued in next lecture...