From AntiVirus to AntiMalware Software and Beyond:
Another Approach to the Protection of Customers
from Dysfunctional System Behaviour
Dr. Klaus Brunnstein
Professor for Application of Informatics
Faculty for Informatics, University of Hamburg, Germany
Paper submitted to the 22nd National Information Systems Security Conference
Status: July 23, 1999
As users tend to rely on systems of growing complexity without themselves
being able to understand or control malevolent behaviour, threats contained in software
must be well understood. The paper deals with different aspects of malicious software
(malware), both self-replicating (aka viruses and worms) and "pure" payloads (aka Trojan
Horses), which are understood as additional though unwished and unspecified features of
systems or programs; such system or software features are regarded as "dysfunctional".
As traditional definitions somewhat lack the consistency which is a prerequisite to describing
complex dysfunctionalities, and as they are partially self-contradicting and incomplete
concerning recent threats, a definition is developed which distinguishes "normal"
dysfunctionalities (produced through weaknesses of contemporary Software Engineering)
from "intentionally malevolent" ones. Complex real threats may be built from two atomic
types, namely self-replicating and Trojanic elements, each of which may act under some
trigger condition. Based on experiences collected from tests, AntiMalware methods need
further development, both concerning classification of newly experienced threats and
concerning online detection in user systems.
1) Introduction: About dysfunctional software and user vulnerability:
With further growing velocity, size and functional complexity of digital artifacts (aka computers,
network systems, digitally-controlled infrastructures etc), users become both growingly
dependent upon proper work of those artifacts (from hardware and device drivers to operating
systems and application software), and at the same time they become less and less able to
understand and control whether some observed function or system behaviour is "what they need
or should get". While the WYSIWYG principle ("What You See Is What You Get") postulates
that any internal behaviour may be "observed" by its visual effects, this principle is not applicable
to complex system functions (e.g. interoperation of tasks in a multi-tasking operating system),
and it is even less applicable to observing functions and impact of "active content" travelling
through networks and influencing local systems via hidden entries in network software (browsers
etc). In some sense, Ralph Nader's observation (when addressing missing safety features of
automobiles, in the 1960s) "unsafe at any speed" is even more applicable to contemporary
Information and Communication Technologies.
Nobody can therefore be surprised that users have difficulty understanding "unforeseen" effects.
Based on some common understanding that present systems are not sufficiently secure and safe,
many users (including IT specialists whose expertise is in other areas than security and safety)
tend to project any unforeseen effect onto the irrational diagnosis "I have been hit by some new
virus", even if actual AV products don't support any such suspicion (e.g. based on their heuristic
methods). Both this "viral assumption" and the usual attempt to escape "ill understood"
situations by simply restarting the system (the key combination CTRL+ALT+DEL is what most
users learn first) originate from the fact that users have (almost) no means or information at all
(except accumulated experience) to understand the "proper" work of their digital artifacts, and
they therefore cannot develop some understanding about whether a deviating behaviour is
possibly hazardous.
One basic reason for the powerlessness of users comes from the fact that features of all these
technologies related to digital artifacts are almost exclusively determined (that is: specified,
designed, implemented, distributed, installed, maintained and updated) by the views of their
manufacturers, and that the dominance of the supply side is not balanced to any meaningful
degree by the requirements of the users. Even worse: customers just have to "Take What You Get"
(TWYG) which in turn makes "control" and "understanding" difficult if not impossible. In
contemporary systems, users are just "using" technologies but are surely not "in control". With
the advent of truly network-based artifacts (from applets to other forms of active content
travelling through networks), "user control" becomes even more difficult to achieve.
As enterprises, agencies, institutions and individuals continue to build their economic existence
upon such digital artifacts, vulnerability of such entities grows correspondingly. If the "normal"
functions of these technologies can hardly be mastered, malicious intent or ill-advised
experimentation (e.g. of youngsters spreading viruses, Trojans or hacker toolkits) tends to
further increase the vulnerability of enterprises, governments, individuals and societies.
Consequently, there is strong need to find new ways to (somewhat) empower users to master
failures of digital artifacts performing "essential" (if not "mission critical") work.
For some time, potentially hazardous software has been growing both in numbers and in
diversity of types. While old-fashioned system and file viruses (infecting either systems via boot
processes or with the help of executable "host" programs) tend to be contained (although still
growing in numbers and diversity, as the recent growth of 32-bit PE infectors demonstrates), the
advent of powerful "script languages" such as Visual Basic for Applications (VBA) or Java has
significantly increased threats of self-reproducing software both for local systems ("viruses") and
for networks ("worms"). As such script languages become broadly used in standard and
application software (e.g. from office applications to CAD/CAM systems, from distributed
databases to electronic commerce), and as they closely interact with their operating software,
active contents import growing risks into enterprises, offices and workstations of individuals.
Following traditional thinking, any risk of digital artifacts must be balanced with other
"adequate" and "adapted" digital artifacts. In this respect, growingly complex "guardians" (from
filters in firewalls to on-access scanners in servers and workstations) aim at protecting users - if
properly maintained - from related (esp. known) threats. This approach feeds a whole branch of
security expertise. But customer protection then depends upon "adequate reaction" and proper
consciousness of related experts. Risks of this approach can well be illustrated with cases of
"Non-Viral Malware" (NVM). While all manufacturers of AntiVirus Software are working very
hard to keep pace with the development of new self-reproducing software (both concerning
viruses and worms), opinions are strongly divided whether and how to protect users from non-
self-replicating malware such as Trojan Horses. While some AV experts are pragmatic enough to
help protect their customers from such threats even if antiviral methods don't apply "well" to
such "pure" payloads, others argue that mechanisms to handle self-reproducing software are not
optimized to handle other malware (which is technically correct to some degree), and that one
should consequently not care for Trojans as long as customers don't broadly complain about such
threats.
Indeed, one essential problem of the contemporary approach to defining "malware" starts from
the assumption that a software program may contain some element (a virus or Trojan payload)
that its manufacturer did not intend or implement. Moreover, finding such malware can usually
not be based on any information from the manufacturer about how the related original software works.
To the contrary, AntiMalware experts must often apply Reverse-Engineering techniques to
understand the intended ("normal") software behaviour; such analyses require high technical
competence and are extremely time-consuming. The application of such techniques, however,
may seriously contradict the interests (e.g. "intellectual property rights") of the original
manufacturer.
Without calling contemporary Information and Communication Technologies into question "in
principle", this paper analyses whether at least the avoidance of or, if some threat materializes,
the detection and cure of "malicious software" can be handled in a different way to support users'
abilities to understand and "control" what is going on.
2) Traditional classification of types of malware:
Within the IT/Network Security and Safety curriculum (4 semesters = 2 years) for advanced
students at the Faculty for Informatics at Hamburg University (first full cycle started in winter
semester 1988/89), computer viruses and other forms of malicious software have been analysed
(as practical examples in learning reverse-engineering methods) in some detail since the advent of
first viruses (on PCs, Brain/Pakistani boot virus in 1986 and Jerusalem file virus in 1987). Based
on cooperation with several AntiVirus experts, the Virus Test Center (VTC) maintains databases
of viral and malicious software against which regular AV tests have been performed since 1991
[VTC Uni-Hamburg].
Originally, the term "computer virus" was introduced by F. Cohen in his doctoral thesis [Cohen
1986]. His theoretical approach (describing self-replication modelled upon a Turing machine),
although more systematic than others, did not influence the practical development of viruses and
countermeasures. Moreover, Cohen's definition (based on a Turing machine) is not directly
applicable to other forms of malicious software including self-replication in networks (aka
propagation). On the "practical" (that is also: less systematic) level, there are almost as many
different definitions of computer viruses as there are authors, and some books also describe other forms of "rogue"
software [e.g. Brunnstein 1989; Ferbrache 1992; Highland 1990; Hoffman 1990; Slade 1994;
Solomon 1991].
In his doctoral thesis, V. Bontchev [Bontchev 1998] gives a survey of the most relevant types of
malware. Some of his most important definitions are:
Logic Bombs:
"The logic bombs are the simplest example of malicious code. They are rarely
stand-alone programs. Most often, they are a piece of code embedded in a
larger program. The embedding is usually done by the programmer (or one of
the programmers) of the larger program."
Trojan Horses:
"A Trojan Horse is a program which performs (or claims to perform)
something useful, while in the same time intentionally performs,
unknowingly to the user, some kind of destructive function. This destructive
function is usually called a payload."
Subtypes of Trojan Horses are: Regular Trojan Horses (available from BBS), Trapdoors, Droppers,
Injectors and Germs:
"A dropper is a special kind of Trojan Horse, the payload of which is to
install a virus on the system under attack. The installation is performed on
one or several infectable objects on the targeted system."
"An injector is a program very similar to a dropper, except that it installs a
virus not on a program but in memory."
"A germ is a program produced by assembling or compiling the original
source code (or a good disassembly) of a virus or of an infected program. The
germ cannot be obtained via a natural infection process. Sometimes the germs
are called first generation viruses."
Computer Virus:
"A computer virus is a computer program which is able to replicate itself by
attaching itself in some way to other computer programs. ... (The) two main
properties of the computer viruses (are) merely that a virus is able to
replicate itself and that it does it by always attaching itself in some way to
another, innocent program. This process of virus replication and attaching to
another program is called infection. The other program, i.e., the program that
is infected by the virus is usually called a victim or a host."
Computer Worms:
"Programs which are able to replicate themselves (usually across computer
networks) as stand-alone programs (or sets of programs) and which do not
depend on the existence of a host program are called computer worms."
Subtypes of "Worms": Chain Letters, Host Computer Worms (with a special form called Rabbits),
and Network Worms (with its special form "Octopus", where the central
segment manages the worm's behaviour on the network).
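Bontchev's taxonomy can be summarized in a small, illustrative data model. This is a sketch for exposition only; the class and field names below are my own choices, not part of Bontchev's text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MalwareType:
    """One entry in the taxonomy, reduced to three distinguishing traits."""
    name: str
    self_replicating: bool   # does it make copies of itself?
    needs_host: bool         # must it embed in / attach to another program?
    standalone: bool         # can it run as an independent program?

# Traits as given in the definitions above:
TAXONOMY = [
    MalwareType("logic bomb",     self_replicating=False, needs_host=True,  standalone=False),
    MalwareType("Trojan Horse",   self_replicating=False, needs_host=False, standalone=True),
    MalwareType("dropper",        self_replicating=False, needs_host=False, standalone=True),
    MalwareType("computer virus", self_replicating=True,  needs_host=True,  standalone=False),
    MalwareType("computer worm",  self_replicating=True,  needs_host=False, standalone=True),
]

# Viruses and worms are separated only by the host-dependence trait:
replicators = [t.name for t in TAXONOMY if t.self_replicating]
print(replicators)  # ['computer virus', 'computer worm']
```

Laying the definitions out this way makes the overlaps discussed below visible: for instance, a dropper differs from a plain Trojan Horse only in what its payload does, not in any of the three structural traits.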
Based on the knowledge of their time, such definitions are rather "ad hoc", and they are neither sufficiently
systematic nor applicable to forms of malware not yet experienced when the related papers were
published. This can well be studied - from today's views - with Bontchev's definitions:
From the perspective of a general form of "payload" (which is also inherent in most "real"
viruses), "logic bombs" are a special case of "Trojan Horses" (with trigger conditions of type
"logic", and with the special case of "time bombs" where the trigger condition is a logical
condition including a time/clock setting).
From the general case of "self-reproducing software", there exist two cases, namely
reproduction in single systems (either "viruses" or "host worms"), and propagation in
networks (as originally described by Shoch-Hupp).
For viruses, self-reproduction needs some sort of host, which may EITHER be a (compiled)
program (as discussed by Bontchev, applicable under "traditional" operating systems such as
DOS, UNIX or VMS) OR some form of "active content" (applicable to interpreted systems
such as Microsoft's Visual Basic for Applications (VBA5/6), Visual Basic Script (VBS), or
JavaScript and Java Applets). The latter case - not explicitly foreseen in Bontchev's
definition of viruses - applies to recently important cases such as macro viruses and Java viruses
(Strange Brew and BeanHive).
Interestingly, the well-known existence of "virus toolkits" such as "mutating engines"
(which may be used to add polymorphic features to some viral code) and of "virus
construction toolkits" (which allow laymen to construct their own viruses and Trojan
Horses) is not addressed in these definitions, despite their broad availability (and although
AV products have detected such forms for some time).
With rapid deployment of new forms of complex malicious software, and with features from several
different categories - such as WNT/RemoteExplorer, which is a worm carrying and dropping a virus
that has a special payload - there is some need for a systematic classification which also permits one
to differentiate between intentionally constructed "malicious software" and, on the other hand, less
intended but equally destructive instances in "normal" software.
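The two "atomic" building blocks introduced in the abstract - self-replicating and Trojanic elements, each optionally guarded by a trigger condition - can be sketched as a toy composition model. This is a hypothetical illustration of the classification idea, not an implementation of any real analysis tool, and the trigger shown for the payload is invented for the example:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AtomicElement:
    """An atomic threat element: either self-replicating or Trojanic (a pure payload)."""
    kind: str                    # "self-replicating" or "trojanic"
    description: str
    # Trigger condition evaluated against the environment; fires unconditionally by default.
    trigger: Callable[[Dict], bool] = lambda env: True

@dataclass
class ComplexThreat:
    """A real threat composed of several atomic elements."""
    name: str
    elements: List[AtomicElement]

    def active_elements(self, env: Dict) -> List[str]:
        """Which elements would act in the given environment?"""
        return [e.description for e in self.elements if e.trigger(env)]

# WNT/RemoteExplorer, described above as a worm carrying and dropping a virus
# that has a special payload, decomposes into three atomic elements:
remote_explorer = ComplexThreat("WNT/RemoteExplorer", [
    AtomicElement("self-replicating", "worm: propagates across the network"),
    AtomicElement("self-replicating", "virus: infects local executables"),
    AtomicElement("trojanic", "payload: acts on local files",
                  trigger=lambda env: env.get("off_hours", False)),  # hypothetical trigger
])

print(remote_explorer.active_elements({"off_hours": False}))
```

The point of the decomposition is that a single name like "worm" or "virus" cannot classify such a threat; only the set of its atomic elements and their trigger conditions can.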
Moreover, a new view of "hostile" software is needed, as contemporary threats must address much
more than traditional forms such as hacking and viruses. Under "holistic" perspectives [Brunnstein
1997] of security (traditionally addressing hacking and viruses) and safety (including also network
attacks such as Denial-of-Service, spoofing etc), understanding of "improper functioning" or
dysfunctional operation needs broader perspectives than "traditional" (less holistic) views.
3) Towards a systematic classification of "dysfunctional software"
Let us start with the assumption that any software that may become "essential" for some
business or individual (in the sense that this institution becomes "dependent" upon its proper
working, and that improper working and functions may lead to vulnerability) is the
product of a systematic, well documented and controlled engineering process. Ideally, this "Software
Engineering" (SE) process would start with a codification of "requirements" that must be
fulfilled when the software is used in the related application domains. At the least, such requirements
are needed where systems are "mission critical" for the proper work of systems concerning risks
to life and health. Even for such critical applications, it is not always possible to anticipate in
which environments and under which conditions the related software may work in the future.
Formal codification of requirements can therefore not generally be postulated.
Remark: it must be admitted that the process described above hardly applies to
contemporary systems and software for business applications. Even somewhat "alien"
functions are often regarded to be "not a bug but a system feature". On the other hand,
software for critical applications is developed with such systematic SE processes. With