Cyber security revisited: Security agents explained

While many security companies have devised myriad ways to address the malicious host problem, none have tried to attack this problem at the root level.
Written by David Lowenstein and Risu Na, Contributors

Commentary - It has always been easy to be the critic, pointing out the faults and shortcomings of others’ ideas and proposed solutions. In recognition of deeply entrenched cyber security views and thinking, we believe it is imperative to re-examine the objectives, means and methods available to address cyber insecurity from first principles. To build a strong and well understood foundational framework for a new solution, it is essential to address the security of data in compromised environments, also known as the malicious host problem.

While many security companies have devised myriad ways to address the malicious host problem, none have tried to attack this problem at the root level. This article examines a construct called ‘Security Agents.’ Before we delve into what this construct is, let’s start by outlining what it is not. A ‘Security Agent’ shares properties with, but should not be confused with, the more general ‘software agent.’ The term software agent refers primarily to software that acts on behalf of a user, or to a secure mobile agent, which, in addition to acting on behalf of users, must be capable of autonomous movement from one device to another while preserving the integrity of its operations and data. In comparison, security agents are not nearly as intelligent, artificial or otherwise, as the other noted agent architectures; they are only mobile if a user wants them to be, and they serve the specific purpose of ensuring the confidentiality, integrity and availability of a user’s data in a malicious host environment.

In our view, the minimum properties that comprise security agents include:

1. User/Owner Centric: Must serve the needs, objectives and purposes of the user, including, but not limited to, data protection.

2. Reduced Transitive Insecurity: Any system is composed of components, and by definition, the more components of unknown or unverified properties that an agent must rely on, the less likely it is to be secure. The corollary of this axiom is that any security system relying on a component known to be insecure cannot itself be secure, a construct we consider of extreme importance.

3. Identity Aware: Given that a security agent’s objective is to serve the user in a trustworthy manner (not to act on behalf of the user, as noted earlier), a mechanism must exist to establish a link ‘chaining’ a user’s identity to their agent and enabling the agent to distinguish between calls made by it and those made by other software.

4. Bi-directional Authentication: A system must not only authenticate the user, but the agent must also authenticate itself to the user.

5. Predicate Enforcement: To be effective, policies must not only be defined, but also enforced via predicates. Further, we believe the policies should be enforced through decision-making and enforcement mechanisms that cannot be detached from the agent.

6. Trustworthy Trustmarks: Users must be able to understand, and thus utilize, simple human-computer interface cues, such as trustworthy trustmarks.

7. Privacy ‘and’ Security: By definition, security and privacy share the mandate of confidentiality and thus should be concurrent design objectives. Further, specific privacy requirements should also be explicitly incorporated into agent design, for example, privacy as the default.

8. Fail Securely: When encountering an unknown component, or one whose security properties cannot be verified, the system should be designed to fail securely, such that no data is ever compromised.

9. Fail Gracefully: In the event that an agent or one of its components has been compromised or otherwise results in the loss of its expected security properties, a mechanism must exist to quickly and easily update all affected agents once the shortcoming has been rectified.

10. Self-verifying, Behaviorally Diversified and Scheduled Morphing Code: Given that the potency of all software is of some finite limit, the agent infrastructure must be capable of continuously updating itself with new, behaviorally diversified agents to extend the potency of any known set of technologies.
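Property 4, bi-directional authentication, can be illustrated with a toy challenge-response exchange over a shared secret. The function names and protocol details below are our own sketch, not the authors’ implementation; a real agent would run such an exchange over an authenticated channel:

```python
import hashlib
import hmac
import secrets

def prove(key: bytes, challenge: bytes) -> bytes:
    """Answer a challenge by keying an HMAC with the shared secret."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, proof: bytes) -> bool:
    """Check a proof in constant time."""
    return hmac.compare_digest(proof, prove(key, challenge))

def mutual_authenticate(user_key: bytes, agent_key: bytes) -> bool:
    """Both directions succeed only if both sides hold the same secret."""
    # The user challenges the agent...
    c1 = secrets.token_bytes(32)
    agent_ok = verify(user_key, c1, prove(agent_key, c1))
    # ...and the agent challenges the user in turn.
    c2 = secrets.token_bytes(32)
    user_ok = verify(agent_key, c2, prove(user_key, c2))
    return agent_ok and user_ok
```

The point of the sketch is symmetry: neither party is trusted until it has answered the other’s challenge.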
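Property 5, predicate enforcement, amounts to expressing each policy as a boolean predicate and denying any request that fails one. A minimal sketch, with hypothetical policy names of our own choosing:

```python
from typing import Any, Callable, Dict

Predicate = Callable[[Dict[str, Any]], bool]

# Hypothetical policies expressed as predicates over a request context.
POLICIES: Dict[str, Predicate] = {
    "user_is_owner": lambda r: r.get("user") == r.get("owner"),
    "channel_authenticated": lambda r: r.get("authenticated") is True,
}

def enforce(request: Dict[str, Any]) -> bool:
    """Permit only if every predicate holds; any failure denies (fail closed)."""
    return all(p(request) for p in POLICIES.values())
```

Because enforcement is just predicate evaluation inside the agent, the decision logic cannot be bypassed without bypassing the agent itself.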

The architecture of security agents is based on the simple principle of taking complete responsibility for user data security as early as possible in the user’s interaction with a computing device, and then maintaining that data in a cryptographically altered state at all times thereafter. This hyper-prophylactic idea, ‘seamless, end-to-end reverse sandboxing,’ can eliminate the transitive risk from all other system components: even if, say, the browser, operating system (OS) or any third-party app has vulnerabilities or can be compromised, the security of user data is no longer at risk.
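The ‘capture early, keep sealed’ principle can be sketched as follows. This is a toy construction for illustration only (an HMAC-SHA256 keystream plus an integrity tag, with function names of our own choosing); a real agent would use a vetted authenticated cipher such as AES-GCM:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream built from HMAC-SHA256 (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Capture user data and immediately place it in a sealed state."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity tag
    return nonce + tag + ct

def open_sealed(key: bytes, blob: bytes) -> bytes:
    """Unseal only if the integrity check passes; otherwise fail securely."""
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("fail securely: integrity check failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

Once `seal` has run, every downstream component, browser, OS, or third-party app, handles only the sealed blob, which is the sense in which transitive risk is removed.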


David Lowenstein is the CEO and co-founder of Federated Networks, an IT security company. He has successfully led corporations in the business process outsourcing, education and environmental services industries. He is currently the Chairman of the Board of Princeton Review.

Risu Na is the CTO and co-founder of Federated Networks. He has led development teams to create e-learning systems and co-founded iSoftech, a cloud-based knowledge management software manufacturer.