Proxedo Network Security Suite 2 Administrator Guide

Copyright 2021 BalaSys IT Security. All rights reserved. This document is protected by copyright and is distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this document may be reproduced in any form by any means without prior written authorization of BalaSys.

This documentation and the product it describes are considered protected by copyright according to the applicable laws.

This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/). This product includes cryptographic software written by Eric Young (eay@cryptsoft.com)

Linux™ is a registered trademark of Linus Torvalds.

Windows™ 10 is a registered trademark of Microsoft Corporation.

The BalaSys™ name and the BalaSys™ logo are registered trademarks of BalaSys IT Security.

The PNS™ name and the PNS™ logo are registered trademarks of BalaSys IT Security.

AMD Ryzen™ and AMD EPYC™ are registered trademarks of Advanced Micro Devices, Inc.

Intel® Core™ and Intel® Xeon™ are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

All other product names mentioned herein are the trademarks of their respective owners.

DISCLAIMER

BalaSys is not responsible for any third-party websites mentioned in this document. BalaSys does not endorse and is not responsible or liable for any content, advertising, products, or other material on or available from such sites or resources. BalaSys will not be responsible or liable for any damage or loss caused or alleged to be caused by or in connection with use of or reliance on any such content, goods, or services that are available on or through any such sites or resources.

October 31, 2024

Abstract

This document is the primary guide for Proxedo Network Security Suite administrators.


Table of Contents

Preface
1. Summary of contents
2. Target audience and prerequisites
3. Products covered in this guide
4. Contact and support information
4.1. Sales contact
4.2. Support contact
4.3. Training
5. About this document
5.1. Feedback
1. Introduction
1.1. What PNS is
1.2. Who uses PNS?
2. Concepts of the PNS Gateway solution
2.1. Main components of the PNS Gateway solution
2.1.1. PNS
2.1.2. Management Server (MS)
2.1.3. Transfer Agent
2.1.4. Management Console (MC)
2.1.5. Authentication Server (AS)
2.1.6. The concept of the CF framework
2.1.7. Virtual Private Networking (VPN) support
2.1.8. Native services
2.1.9. High Availability
2.1.10. Operating system
2.2. The concepts and architecture of PNS firewalls
2.2.1. Access control
2.2.2. Operation modes of PNS
2.2.3. Proxying connections
2.2.4. Traffic analysis with proxies
2.2.5. Proxy customization
2.2.6. Modular architecture
3. Managing PNS hosts
3.1. MS and MC
3.1.1. Defining a new host and starting MC
3.2. MC structure
3.2.1. Configuration tree
3.2.2. Main workspace
3.2.3. Menu & status bars and Preferences
3.3. Configuration and Configuration management
3.3.1. Configuration process
3.3.2. Configuration buttons
3.3.3. Committing related components
3.3.4. Recording and commenting configuration changes
3.3.5. Multiple access and lock management
3.3.6. Status indicator icons
3.3.7. Copy, paste and multiple select in MC
3.3.8. Links and variables
3.3.9. Disabling rules and objects
3.3.10. Filtering list entries
3.4. Viewing PNS logs
3.4.1. The command bar of the log viewer
4. Registering new hosts
4.1. Bootstrap a new host
4.2. Reconnecting to a host
4.2.1. Reconnecting MS to a host
5. Networking, routing, and name resolution
5.1. Configuring networking interfaces
5.1.1. General interface configuration
5.1.2. Configuring virtual networks and alias interfaces
5.1.3. Configuring bond interfaces
5.1.4. Configuring bridge interfaces
5.1.5. Enabling spoof protection
5.1.6. Interface options and activation scripts
5.1.7. Interface status and statistics
5.2. Managing name resolution
5.3. Managing client-side name resolution
5.3.1. Configure name resolution
5.4. The routing editor
5.4.1. Routes
5.4.2. Sorting, filtering, and disabling routes
5.4.3. Managing the routing tables locally
6. Managing network traffic with PNS
6.1. Understanding Application-level Gateway policies
6.2. Zones
6.2.1. Managing zones with MC
6.2.2. Creating new zones
6.2.3. Zone hierarchies
6.2.4. Using hostnames in zones
6.2.5. Finding zones
6.2.6. Exporting zones
6.2.7. Importing zones
6.2.8. Deleting a zone or more zones simultaneously
6.3. Application-level Gateway instances
6.3.1. Understanding Application-level Gateway instances
6.3.2. Managing Application-level Gateway instances
6.3.3. Creating a new instance
6.3.4. Configuring instances
6.3.5. Instance parameters — general
6.3.6. Instance parameters — logging
6.3.7. Instance parameters — Rights
6.3.8. Instance parameters — miscellaneous
6.3.9. Increasing the number of running processes
6.4. Application-level Gateway services
6.4.1. Creating a new service
6.4.2. Creating a new packet filtering Service (PFService)
6.4.3. Creating a new DenyService
6.4.4. Creating a new DetectorService
6.4.5. Routing — selecting routers and chainers
6.5. Configuring firewall rules
6.5.1. Understanding Application-level Gateway firewall rules
6.5.2. Transparent and non-transparent traffic
6.5.3. Finding firewall rules
6.5.4. Creating firewall rules
6.5.5. Tagging firewall rules
6.5.6. Configuring nontransparent rules with inband destination selection
6.5.7. Connection rate limiting
6.6. Proxy classes
6.6.1. Customizing proxies
6.6.2. Renaming and editing proxy classes
6.6.3. Analyzing embedded traffic
6.7. Policies
6.7.1. Creating and managing policies
6.7.2. Detector policies
6.7.3. Encryption policies
6.7.4. GeoIP policies
6.7.5. GeoLocationLimit
6.7.6. GeoPacketLimit
6.7.7. Limit policies
6.7.8. PacketLimit
6.7.9. Matcher policies
6.7.10. NAT policies
6.7.11. Resolver policies
6.7.12. Stacking providers
6.8. Monitoring active connections
6.9. Traffic reports
6.9.1. Configuring PNS reporting
7. Logging with syslog-ng
7.1. Introduction to syslog-ng
7.1.1. Global options
7.1.2. Sources
7.1.3. Destinations
7.1.4. Filters
7.2. Configuring syslog-ng with MC
7.2.1. Configure syslog-ng
7.2.2. Configuring syslog-ng components through MC
7.2.3. Configuring TLS-encrypted logging
8. The Text editor plugin
8.1. Using the Text editor plugin
8.1.1. Configure services with the Text editor plugin
8.1.2. Use the additional features of Text editor plugin
9. Native services
9.1. BIND
9.1.1. BIND operation modes
9.1.2. Configuring BIND with MC
9.1.3. Setting up split-DNS configuration
9.2. NTP
9.2.1. Configuring NTP with MC
9.2.2. Status and statistics
9.3. Postfix
9.3.1. Configuring Postfix with MC
9.4. Local services on PNS
9.4.1. Enabling access to local services
10. Local firewall administration
10.1. Linux
10.2. Login to the firewall
10.3. Editing configuration files
10.4. Network configuration
10.5. System logging
10.6. NTP
10.7. BIND
10.8. Updating and upgrading your PNS hosts
10.9. Packet filter
10.10. PNS configuration
10.10.1. Policy.py and instances.conf
10.10.2. Application-level Gateway control
10.11. Managing core dump files
11. Key and certificate management in PNS
11.1. Cryptography basics
11.1.1. Symmetric and asymmetric encryption
11.2. PKI Basics
11.2.1. Centralized PKI system
11.2.2. Digital certificates
11.2.3. Creating and managing certificates
11.2.4. Verifying the validity of certificates
11.2.5. Verification of certificate revocation state
11.2.6. Authentication with certificates
11.2.7. Digital encryption in work
11.2.8. Storing certificates and keys
11.2.9. Using Hardware Security modules
11.3. PKI in MS
11.3.1. Committing changes and locking in PKI
11.3.2. The certificate entity
11.3.3. Rules of distribution and owner hosts
11.3.4. Trusted groups
11.3.5. The PKI menu
11.3.6. PKI management
11.3.7. Trusted CAs
11.3.8. Managing certificates
12. Clusters and high availability
12.1. Introduction to clustering
12.2. Clustering solutions
12.2.1. Fail-Over clusters
12.2.2. Load balance clusters
12.3. Managing clusters with MS
12.4. Creating clusters
12.4.1. Creating a new cluster (bootstrapping a cluster)
12.4.2. Adding new properties to clusters
12.4.3. Adding a new node to a PNS cluster
12.4.4. Converting a host to a cluster
12.5. Keepalived for High Availability
12.5.1. Functionality of Keepalived
12.5.2. Prerequisites for configuring Keepalived
12.5.3. Configuring Keepalived
12.5.4. Configuration examples and best practices for Keepalived configuration
12.6. Availability Checker
12.6.1. Prerequisites for configuring the Availability Checker plugin
12.6.2. Configuring the Availability Checker
13. Advanced MS and Agent configuration
13.1. Setting configuration parameters
13.1.1. Configuring user authentication and privileges
13.1.2. Configuring backup
13.1.3. Configuring the connection between MS and MC
13.1.4. Configuring MS and agent connections
13.1.5. Configuring MS database save
13.1.6. Setting configuration check
13.1.7. Configuring CRL update settings
13.1.8. Set logging level
13.1.9. Configuring SSL handshake parameters
13.2. Setting agent configuration parameters
13.2.1. Configuring connections for agents
13.2.2. Configuring connection to engine
13.2.3. Configuring logging for agents
13.2.4. Configuring SSL handshake parameters for agents
13.3. Managing connections
13.3.1. Setting up initial connection with management agents
13.3.2. Configuring connection with agents
13.3.3. Administering connections
13.3.4. Configuring recovery connections
13.4. Handling XML databases
14. Virus and content filtering using CF
14.1. Content Filtering basics
14.1.1. Quarantining
14.2. Content Filtering with CF
14.2.1. Creating module instances
14.2.2. Creating scanpaths
14.2.3. Routers and rule groups
14.2.4. Configuring PNS proxies to use CF
14.2.5. Managing CF performance and resource use
14.3. Quarantine management in MC
14.3.1. Information stored about quarantined objects
14.3.2. Configuring quarantine cleanup
15. Connection authentication and authorization
15.1. Authentication and authorization basics
15.1.1. Inband authentication
15.1.2. Outband authentication
15.2. The concept of AS
15.2.1. Supported backends and authentication methods
15.3. Authenticating connections with AS
15.3.1. Configuring AS
15.3.2. Authentication of PNS services with AS
15.3.3. Authorization of PNS services
15.3.4. Configuring the Authentication Agent
15.4. Logging in AS
16. Virtual Private Networks
16.1. Virtual Private Networking basics
16.1.1. Types of VPN
16.1.2. VPN topologies
16.1.3. The IPSec protocol
16.1.4. The OpenVPN protocol
16.2. Using VPN connections
16.2.1. Using VPN connections
16.3. Configuring IPSec connections
16.3.1. Configuring IPSec connections
16.3.2. IPSec options
16.3.3. Global IPSec options
16.4. Configuring SSL (OpenVPN) connections
16.4.1. Configuring SSL connections
16.4.2. SSL options
17. Integrating PNS to external monitoring systems
17.1. Monitoring PNS with Munin
17.2. Installing a Munin server on a MS host
17.3. Monitoring PNS with Nagios
A. Keyboard shortcuts in Management Console
A.1. Function keys
A.2. Shortcuts
A.3. Access keys
B. Further readings
B.1. PNS-related material
B.2. General, Linux-related materials
B.3. Postfix documentation
B.4. BIND Documentation
B.5. NTP references
B.6. SSH resources
B.7. TCP/IP Networking
B.8. Netfilter/nftables
B.9. General security-related resources
B.10. syslog-ng references
B.11. Python references
B.12. Public key infrastructure (PKI)
B.13. Virtual Private Networks (VPN)
C. Proxedo Network Security Suite End-User License Agreement
C.1. 1. SUBJECT OF THE LICENSE CONTRACT
C.2. 2. DEFINITIONS
C.3. 3. LICENSE GRANTS AND RESTRICTIONS
C.4. 4. SUBSIDIARIES
C.5. 5. INTELLECTUAL PROPERTY RIGHTS
C.6. 6. TRADE MARKS
C.7. 7. NEGLIGENT INFRINGEMENT
C.8. 8. INTELLECTUAL PROPERTY INDEMNIFICATION
C.9. 9. LICENSE FEE
C.10. 10. WARRANTIES
C.11. 11. DISCLAIMER OF WARRANTIES
C.12. 12. LIMITATION OF LIABILITY
C.13. 13. DURATION AND TERMINATION
C.14. 14. AMENDMENTS
C.15. 15. WAIVER
C.16. 16. SEVERABILITY
C.17. 17. NOTICES
C.18. 18. MISCELLANEOUS
D. Creative Commons Attribution Non-commercial No Derivatives (by-nc-nd) License

List of Examples

3.1. Referring to components with variables
5.1. Referencing static and dynamic interfaces in firewall rules
6.1. Using the Internet zone
6.2. Subnetting
6.3. Finding IP networks
6.4. Customized logging for HTTP accounting
6.5. Overriding the target port SQLNetProxy
6.6. Overriding the target port SQLNetProxy
6.7. RFC-compliant proxying in Application-level Gateway
6.8. Virus filtering and stacked proxies
6.9. Defining a Detector policy
6.10. GeoPacketLimit example settings
6.11. PacketLimit example settings
6.12. DNSMatcher for two domain names
6.13. Defining a RegexpMatcher
6.14. Blacklisting e-mail recipients
6.15. SmtpProxy class using a matcher for controlling relayed zones
6.16. Address translation examples using GeneralNAT
6.17. Defining a Resolver policy
6.18. Using HashResolver to direct traffic to specific servers
7.1. Selecting log messages from Postfix using filter
7.2. Setting up a router
9.1. Forward-only DNS server
9.2. Split-DNS implementation
9.3. Special requirements on mail handling
10.1. Specifying the target IP address of a TCP destination
15.1. BasicAccessList

List of Procedures

2.1.6.1. Content Filtering with CF
3.1.1. Defining a new host and starting MC
3.2.1.3.1. Adding new configuration components to host
3.2.3.1. Configuring general MC preferences
3.2.3.2. Configuring PNS Class Editor preferences
3.2.3.3. Configuring PNS Rules preferences
3.2.3.4. Configuring MS hosts
3.2.3.6.1. Defining variables
3.2.3.6.2. Editing variables
3.2.3.6.3. Deleting variables
3.3.1.1. Configuring PNS - the general process
3.3.4. Recording and commenting configuration changes
4.1. Bootstrap a new host
4.2.1. Reconnecting MS to a host
5.1.1.1. Configuring a new interface
5.1.2.1. Creating a VLAN interface
5.1.2.2. Creating an alias interface
5.1.3. Configuring bond interfaces
5.1.4. Configuring bridge interfaces
5.1.5.1. Configuring spoof protection
5.1.6.1.1. Creating interface activation scripts
5.1.6.2.1. Creating interface groups
5.1.6.3.1. Configuring interface parameters
5.3.1. Configure name resolution
5.4.2.1. Filtering routes
6.2.2. Creating new zones
6.2.3.1. Organizing zones into a hierarchy
6.2.6. Exporting zones
6.2.7. Importing zones
6.2.8. Deleting a zone or more zones simultaneously
6.3.3. Creating a new instance
6.3.4. Configuring instances
6.3.9. Increasing the number of running processes
6.4.1. Creating a new service
6.4.2. Creating a new packet filtering Service (PFService)
6.4.3. Creating a new DenyService
6.4.4. Creating a new DetectorService
6.4.5.1. Setting routers and chainers for a service
6.5.3. Finding firewall rules
6.5.4. Creating firewall rules
6.5.5. Tagging firewall rules
6.5.7. Connection rate limiting
6.6.1.1. Derive a new proxy class
6.6.1.2. Customizing proxy attributes
6.6.2. Renaming and editing proxy classes
6.6.3.1. Stack proxies
6.7.1. Creating and managing policies
6.7.10.1.1. Configuring NAT
6.9.1. Configuring PNS reporting
7.2.1. Configure syslog-ng
7.2.2.1.1. Set global options
7.2.2.2.1. Create sources
7.2.2.2.2. Create drivers
7.2.2.4.1. Set filters
7.2.2.5.1. Configure routers
7.2.3. Configuring TLS-encrypted logging
8.1.1. Configure services with the Text editor plugin
8.1.2. Use the additional features of Text editor plugin
9.1.2.1. Configuring BIND with MC
9.1.3. Setting up split-DNS configuration
9.2.1. Configuring NTP with MC
9.3.1.1. Configuring Postfix with MC
9.4.1. Enabling access to local services
10.8. Updating and upgrading your PNS hosts
10.10.1.1. Edit the Policy.py file
11.1.1.4.1. Procedure of encrypted communication and authentication
11.2.3.1. Creating a certificate
11.3.7.2. Creating a new CA
11.3.7.4. Signing CA certificates with external CAs
11.3.8.2. Creating certificates
11.3.8.3. Revoking a certificate
11.3.8.4. Deleting certificates
11.3.8.5. Exporting certificates
11.3.8.6. Importing certificates
11.3.8.7. Signing your certificates with external CAs
11.3.8.8. Importing certificates with external private key
11.3.8.9. Monitoring licenses and certificates
12.4.1. Creating a new cluster (bootstrapping a cluster)
12.4.2. Adding new properties to clusters
12.4.3. Adding a new node to a PNS cluster
12.4.4. Converting a host to a cluster
12.5.3.1. Configure Keepalived
12.5.4.1. Simple Cluster with 2 nodes
12.5.4.2. Testing or Pilot node
12.5.4.3. Multiple backup nodes
12.5.4.4. Multiple VRRP groups in the same cluster
12.5.4.5. Managing individual OpenVPN tunnels
12.6.2.1. Configuring the Availability Checker
13.1.1.1. Adding new users to MS
13.1.1.2. Deleting users from MS
13.1.1.3. Changing passwords in MS
13.1.1.4.1. Editing user privileges in MS
13.1.1.5.1. Modifying authentication settings
13.1.2.1. Configuring automatic MS database backups
13.1.2.2. Restoring a MS database backup
13.1.3.1. Configuring the bind address and the port for MS-MC connections
1. Using linking for the IP address
13.1.4. Configuring MS and agent connections
13.1.5. Configuring MS database save
13.1.8. Set logging level
13.1.9. Configuring SSL handshake parameters
13.2.3. Configuring logging for agents
13.2.4. Configuring SSL handshake parameters for agents
13.3.3. Administering connections
13.3.4. Configuring recovery connections
14.2.1.1. Creating a new module instance
14.2.2.1. Creating a new scanpath
14.2.3.1. Creating and configuring routers
14.2.4.1. Configuring communication between PNS proxies and CF
15.1.2.1. Outband authentication using the Authentication Agent
15.3.1.1.1. Creating a new instance
15.3.2.1. Configuring communication between PNS and AS
15.3.2.2. Configuring PNS Authentication policies
15.3.3.1. Configuring authorization policies
16.2.1. Using VPN connections
16.3.1. Configuring IPSec connections
16.4.1. Configuring SSL connections
16.4.2.1. Configuring the VPN management daemon
17.1. Monitoring PNS with Munin
17.2. Installing a Munin server on a MS host
17.3. Monitoring PNS with Nagios

Preface

Welcome to the Proxedo Network Security Suite 2 Administrator Guide!

This document describes how to configure and manage Proxedo Network Security Suite 2 and its components. Background information for the technology and concepts used by the product is also discussed.

1. Summary of contents

Chapter 1, Introduction describes the main functionality and purpose of the Proxedo Network Security Suite.

Chapter 2, Concepts of the PNS Gateway solution describes the features and capabilities of the different components of PNS, as well as the concepts of PNS.

Chapter 3, Managing PNS hosts describes the main configuration utility of PNS.

Chapter 4, Registering new hosts explains how to manage several firewalls using a single management server.

Chapter 5, Networking, routing, and name resolution describes the management of network interfaces, such as Ethernet cards.

Chapter 6, Managing network traffic with PNS describes how to customize the firewall system for optimal security.

Chapter 7, Logging with syslog-ng introduces the capabilities of syslog-ng.

Chapter 8, The Text editor plugin discusses how to manage external services from Management Console.

Chapter 9, Native services describes the built-in DNS, NTP and mailing services of PNS.

Chapter 10, Local firewall administration explains how to manage PNS from a local console.

Chapter 11, Key and certificate management in PNS introduces the use and management of certificates.

Chapter 12, Clusters and high availability introduces the use and management of PNS clusters.

Chapter 13, Advanced MS and Agent configuration discusses various advanced topics.

Chapter 14, Virus and content filtering using CF discusses the concepts, configuration, and use of the Content Filtering framework and the related modules.

Chapter 15, Connection authentication and authorization details the authentication and authorization services provided by PNS and the Authentication Server.

Chapter 16, Virtual Private Networks describes how to build encrypted connections between remote networks and hosts using virtual private networks (VPNs).

Chapter 17, Integrating PNS to external monitoring systems describes how to integrate PNS to your monitoring infrastructure.

Appendix A, Keyboard shortcuts in Management Console describes the keyboard shortcuts available in Management Console.

Appendix B, Further readings is a list of suggested reference materials in different PNS and network security related fields.

Appendix C, Proxedo Network Security Suite End-User License Agreement includes the text of the End-User License Agreement applicable to PNS products.

Appendix D, Creative Commons Attribution Non-commercial No Derivatives (by-nc-nd) License includes the text of the Creative Commons Attribution Non-commercial No Derivatives (by-nc-nd) License applicable to The Proxedo Network Security Suite 2 Administrator Guide.

2. Target audience and prerequisites

This guide is intended for use by system administrators and consultants responsible for network security and whose task is the configuration and maintenance of PNS firewalls. PNS gives them a powerful and versatile tool to gain full control over their network traffic and enables them to protect their clients against Internet-based threats.

This guide is also useful for IT decision makers evaluating different firewall products because apart from the practical side of everyday PNS administration, it introduces the philosophy behind PNS without the marketing side of the issue.

The following skills and knowledge are necessary for a successful PNS administrator.

  • Linux: At least a power user's knowledge is required.

  • Experience in system administration: Experience in system administration is certainly an advantage, but not absolutely necessary.

  • Programming language knowledge: It is not an explicit requirement to know any programming languages, though being familiar with the basics of Python may be an advantage, especially in evaluating advanced firewall configurations or in troubleshooting misconfigured firewalls.

  • General knowledge of firewalls: A general understanding of firewalls, their role in the enterprise IT infrastructure, and the main concepts and tasks associated with firewall administration is essential. To fulfill this requirement, a significant part of Chapter 2, Concepts of the PNS Gateway solution is devoted to introducing general firewall concepts.

  • Knowledge of Netfilter concepts: In-depth knowledge is strongly recommended; while it is not strictly required, it definitely helps in understanding the underlying operations and shortens the learning curve.

  • Knowledge of the TCP/IP protocol suite: High-level knowledge of the TCP/IP protocol suite is a must; no successful firewall administration is possible without it.

Table 1. Prerequisites


3. Products covered in this guide

The PNS Distribution DVD-ROM contains the following software packages:

  • Current version of PNS 2 packages.

  • Current version of Management Server (MS) 2.

  • Current version of Management Console (MC) 2 (GUI) for both Linux and Windows operating systems, and all the necessary software packages.

  • Current version of Authentication Server (AS) 2.

  • Current version of the Authentication Agent (AA) 2, the AS client for both Linux and Windows operating systems.

For a detailed description of hardware requirements of PNS, see Chapter 1, System requirements in Proxedo Network Security Suite 2 Installation Guide.

For additional information on PNS and its components, visit the PNS website, which contains white papers, tutorials, and online documentation on the above products.

4. Contact and support information

This product is developed and maintained by BalaSys IT Security.

Contact: 


         BalaSys IT Security.
         4 Alíz Street
         H-1117 Budapest, Hungary
         Tel: +36 1 646 4740
         E-mail: 
         Web: http://balasys.hu/
       

4.1. Sales contact

You can contact us directly with sales-related topics at the e-mail address , or leave us your contact information and we will call you back.

4.2. Support contact

To access the BalaSys Support System, sign up for an account at the BalaSys Support System page. Online support is available 24 hours a day.

BalaSys Support System is available only for registered users with a valid support package.

Support e-mail address: .

4.3. Training

BalaSys IT Security holds courses on using its products for new and experienced users. For dates, details, and application forms, visit the https://www.balasys.hu/en/services#training webpage.

5. About this document

This guide is a work-in-progress document with new versions appearing periodically.

The latest version of this document can be downloaded from the Documentation Page.

5.1. Feedback

Any feedback is greatly appreciated, especially on what else this document should cover, including protocols and network setups. General comments, errors found in the text, and any suggestions about how to improve the documentation are welcome at .

Chapter 1. Introduction

This chapter introduces the Proxedo Network Security Suite (PNS) in a non-technical manner, discussing how and why it is useful, and what additional security it offers to an existing IT infrastructure.

1.1. What PNS is

PNS provides complete control over regular and encrypted network traffic, with the capability to filter and also modify the content of the traffic.

PNS is a perimeter defense tool, developed for companies with extensive networks and high security requirements. PNS inspects and analyzes the content of the network traffic to verify that it conforms to the standards of the network protocol in use (for example, HTTP, IMAP, and so on). PNS provides central content filtering, including virus and spam filtering, at the network perimeter, and is capable of inspecting a wide range of encrypted and embedded protocols, for example, HTTPS and POP3S used for secure web browsing and mailing. PNS offers a central management interface for handling multiple firewalls, and an extremely flexible, scriptable configuration to suit divergent requirements.

The most notable features of PNS are the following:

Complete protocol inspection: In contrast with packet filtering firewalls, PNS handles network connections on the proxy level. PNS terminates connections on one side, and establishes new connections on the other; that way the transferred information is available on the device in its entirety, enabling complete protocol inspection. PNS has inspection modules for over twenty different network protocols and can inspect 100% of the commands and attributes of the protocols. All proxy modules understand the specifications of the protocol and can reject connections that violate the standards. Also, every proxy is capable of inspecting the TLS- or SSL-encrypted version of the respective protocol.

Unmatched configuration possibilities: The more parameters of a network connection are known, the more precise policies can be created about the connection. Complete protocol inspection provides an immense amount of information, giving PNS administrators unprecedented accuracy to implement the regulations of the security policy on the network perimeter. The freedom in customization helps to avoid bad trade-offs between effective business-processes and the required level of security.

Reacting to network traffic: PNS can not only make complex decisions based on information obtained from network traffic, but is also capable of modifying certain elements of the traffic according to its configuration. This makes it possible to hide data about security risks, and can also be used to mitigate the security vulnerabilities of applications protected by the firewall.

Controlling encrypted channels: PNS offers complete control over encrypted channels. The thorough inspection of embedded traffic can in itself reveal and stop potential attacks like viruses, trojans, and other malicious programs. This capability of the product provides protection against infected e-mails, or websites with dangerous content, even if they arrive in encrypted (HTTPS, POP3S, or IMAPS) channels. The control over SSH and SSL traffic makes it possible to separately handle special features of these protocols, like port- and X-forwarding. Furthermore, the technology gives control over which remote servers the users can access by verifying the validity of the server certificates on the firewall. That way, the company security policy can deny access to untrusted websites with invalid certificates.

Centralized management system: The easy-to-use, central management system provides a uniform interface to configure and monitor the elements used in perimeter defense: PNS devices, Content Filtering servers, as well as clusters of these elements. Different, even completely independent groups of PNS devices can be managed from the system. That way devices located on different sites, or at different companies can be administered using a single interface.

Content Filtering on the network perimeter: PNS provides a platform for antivirus engines. Using PNS’s architecture, these engines can filter data channels they cannot access on their own. PNS’s modularity and its more than twenty proxy modules enable virus and spam filtering products to find malicious content in an unparalleled number of protocols and their encrypted versions.

Single Sign On authentication: Linking all network connections to a single authentication greatly simplifies user-privilege management and system audit. PNS’s single sign on solution is a simple and user-friendly way to cooperate with Active Directory. Existing LDAP, PAM, AD, and RADIUS databases integrate seamlessly with PNS’s authentication module. Both password-based and strong (S/Key, SecureID, X.509, and so on) authentication methods are supported. X.509-based authentication is supported by the SSH proxy as well, making it possible to use smartcard-based authentication mechanisms and integrate with enterprise PKI systems.

1.2. Who uses PNS?

The protection provided by the PNS application-level perimeter defense technology satisfies even the highest security needs. The typical users of PNS come from the governmental, financial, and telecommunication sectors, including industrial companies as well. PNS is especially useful in the following situations:

  • to protect networks that handle sensitive data or provide critical business processes

  • to solve unique, specialized IT security problems

  • to filter encrypted channels (for example, HTTPS, POP3S, IMAPS, SMTPS, FTPS, SFTP, and so on)

  • to perform centralized content filtering (virus and spam) even in encrypted channels

  • to filter specialized protocols (for example, Radius, SIP, SOAP, SOCKS, MS RPC, VNC, and so on)

Chapter 2. Concepts of the PNS Gateway solution

This chapter provides an overview of the PNS Gateway solution, introduces its main concepts and explains the relationship of the various components.

2.1. Main components of the PNS Gateway solution

A typical PNS Gateway solution consists of the following components:

  • One or more PNS firewall hosts. The Application-level Gateway inspects and analyzes all connections.

  • A Management Server (MS)

    MS is the central managing server of the PNS Gateway solution. MS stores the settings of every component, and generates the configuration files needed by the other components. A single MS can manage the configuration of several PNS firewalls — for example, if an organization has several separate facilities with their own firewalls, each of them can be managed from a central Management Server.

  • One or more desktop computers running the Management Console (MC), the graphical user interface of MS

    The PNS administrators use this application to manage the entire system.

  • Transfer agents

    These applications perform the communication between MS and the other components.

  • One or more Content Filtering (CF) servers

    CF servers can inspect and filter the content of the network traffic, for example, using different virus and spam filtering modules. CF can inspect over ten network protocols, including encrypted ones, for example, SMTP, HTTP, and HTTPS.

  • One or more Authentication Servers (AS)

    AS can authenticate every network connection of the clients against a variety of databases, including LDAP, RADIUS, or TACACS. Clients can also authenticate out-of-band using a separate Authentication Agent.

Note
The name of the application effectively serving as the Application-level Gateway component of Proxedo Network Security Suite is PNS; commands, paths, and internal references relate to that naming.

The following figure shows how these components operate:

The architecture of the PNS firewall system

Figure 2.1. The architecture of the PNS firewall system


2.1.1. PNS

The heart of the PNS-based firewall solution is the firewalling software itself, which is a set of proxy modules acting as application layer gateways. PNS is an application proxy firewall. For details on the architecture of PNS itself, see Section 2.2, The concepts and architecture of PNS firewalls.

PNS must be installed on an Ubuntu-based operating system (Ubuntu 22.04 LTS), which is installed automatically when booting from the PNS installation media.

2.1.2. Management Server (MS)

The Management Server (MS) handles the configuration tasks of the entire solution. Your firewall administrators use the Management Console (MC) application on their desktop to access MS and modify the configuration of your firewalls. MS is the central command center of the solution: it stores and manages the configuration of PNS firewall hosts.

The real power of MS surfaces when more than one PNS firewall has to be administered: instead of configuring the different firewalls individually and manually, you can configure them at a central location with MS, and upload the configuration changes to the firewalls. As MS stores the configuration of every firewall, you can back up the configuration of your entire firewall system. In case of an emergency, you can restore the configuration of every firewall with a few clicks.

2.1.3. Transfer Agent

Technically, MS does not communicate directly with the PNS host: all communication is done through the PNS Transfer Agent application, which is responsible for transporting configuration files to the managed hosts, running MS-initiated commands, and reporting the firewall configuration and other related information to MS. The PNS Transfer Agent is automatically installed on every PNS host. The communication is secured using Secure Socket Layer (SSL) encryption. The communicating hosts authenticate each other using certificates. For more information, see Section 13.1.1.5, Configuring authentication settings in MS.

Communication between the agents and MS uses TCP port 1311. If PNS and MS are installed on the same host, the communication between the transfer agent and the MS server uses UNIX domain sockets.

Warning

Agent connections must be enabled on every managed host, otherwise MS cannot control the hosts.

By default, the MS host initiates the communication channel to the agents, but the agents can also be configured to start the communication, if required.
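
If an agent connection does not come up, a quick reachability test can help separate networking problems from configuration problems. The following minimal Python sketch only checks that the agent port accepts TCP connections from the MS host; it does not perform the SSL handshake or the certificate checks described above, and the host address used here is a placeholder.

    #!/usr/bin/env python3
    # Minimal sketch: check that the transfer agent port (TCP 1311 by default)
    # accepts connections. The host address below is a placeholder.
    import socket

    AGENT_HOST = "192.0.2.10"   # hypothetical managed PNS host
    AGENT_PORT = 1311           # default agent port

    try:
        with socket.create_connection((AGENT_HOST, AGENT_PORT), timeout=5):
            print("TCP port %d on %s is reachable" % (AGENT_PORT, AGENT_HOST))
    except OSError as exc:
        print("Cannot reach %s:%d: %s" % (AGENT_HOST, AGENT_PORT, exc))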

2.1.4. Management Console (MC)

The Management Console (MC) is the graphical interface to Management Server (MS). A single MS engine can manage several different PNS firewalls. MC is designed so that almost all PNS administration tasks can be accomplished with it; therefore, no advanced Linux skills are required to manage the firewall.

Note

MC can connect to the MS host remotely, even over the Internet. All connections between MC and MS are SSL-encrypted, and use TCP port 1314.

MC can only alter configurations stored in the MS database. It does not directly communicate with the firewall hosts.

Communication between MC, MS, and PNS

Figure 2.2. Communication between MC, MS, and PNS


MC can be installed on the following platforms:

  • Microsoft Windows Vista or later

  • Linux

2.1.5. Authentication Server (AS)

PNS can authenticate every connection: it is a single sign-on (SSO) authentication point for network connections. During authentication, PNS communicates with the Authentication Agent (AA) application that runs on the client computers.

However, PNS itself does not access databases of authentication information such as usernames, passwords, and access rights. It operates indirectly with the help of authentication backends through an authentication middleware, the Authentication Server (AS). To authenticate a connection, PNS connects to AS, and AS retrieves the necessary information from a user database. AS notifies PNS about the results of the authentication, together with some additional data about the user that can be used for authorization.

The operation of AS

Figure 2.3. The operation of AS


AS supports the following user database backends:

  • plain file in Apache htpasswd format

  • Pluggable Authentication Module (PAM) framework

  • RADIUS server

  • LDAP server (plain BIND, password authentication, or with its own LDAP scheme)

  • Microsoft Active Directory

AS supports the following authentication methods:

  • plain password-based authentication

  • challenge/response method (S/KEY, CryptoCard RB1)

  • X.509 certificates

  • Kerberos 5

2.1.6. The concept of the CF framework

CF is not a Content Filtering engine, it is a framework to manage and configure various third-party Content Filtering modules (engines) from a uniform interface. PNS uses these modules to filter the traffic. These modules run independently from PNS. They do not even have to run on the same machines. PNS can send the data to be inspected to these modules, along with configuration parameters appropriate for the scenario. For example, a virus filtering module can be used to inspect all files in the traffic, but different parameters can be used to inspect files in HTTP downloads and e-mail attachments. Also, different scenarios can use a different set of modules for inspecting the traffic. Using the above example, HTTP traffic can be inspected with a virus filter, a content filter, and all client-side scripts can be removed. E-mails can be scanned for viruses using the same virus filtering module (but possibly with stricter settings), and also inspected by a spam filtering module.

Interaction of PNS and CF

Figure 2.4. Interaction of PNS and CF


The interaction of PNS and CF takes place as follows:

  • A PNS proxy can send data for further inspection to a CF rule group.

  • A rule group is used to define a scenario (using a set of router rules).

  • The router rules of the scenario are condition – action pairs that determine how a particular object should be inspected. This decision is based on meta-information about the traffic or objects received from PNS and on information collected by CF.

    • The condition can be any information that PNS/CF can parse, for example, the client's IP address, the MIME-type of the object, and so on.

    • The action is either a default action (such as ACCEPT or REJECT), or a scanpath — a list of Content Filtering module instances (the modules and their settings corresponding to the scenario) that will inspect the traffic. Rule groups have a scanpath configured as default, but the routers in the group can select a different scanpath for certain conditions.

The examples demonstrated in Figure 2.5, Content Filtering scenarios in CF can be translated to the CF terms defined in the previous paragraph as follows:

Content Filtering scenarios in CF

Figure 2.5. Content Filtering scenarios in CF


  1. There are two rule groups (scenarios) defined, one for HTTP traffic, one for SMTP.

  2. The Router rules formulate a scanpath in the HTTP rule group.

  3. The scanpath includes module instances of a virus filtering, a content filtering, and an HTML module that are configured to remove all scripts.

This is only a basic example, but further router rules can be used to optimize the decisions. For example, it is unreasonable to remove client-side scripts in non-HTML files that are downloaded, and so on. Similarly, another rule group corresponds to the SMTP scenario, with a scanpath including a virus filtering and a spam filtering module instance.

The whole process is summarized in the following procedure.

2.1.6.1. Procedure – Content Filtering with CF

  1. A PNS proxy sends the traffic to be inspected to an appropriate CF rule group.

  2. CF evaluates the router conditions of the rule group. If no condition is fulfilled, the action set as default (a default scanpath, or an ACCEPT/REJECT) is performed. Otherwise, the action/scanpath specified for the condition is followed.

  3. The traffic is inspected by the module instances specified in the selected scanpath. A module instance can be used in multiple scanpaths, with different parameters in each one.

  4. The processed traffic is returned to PNS.
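
To make this decision flow more tangible, the following illustrative Python sketch models rule groups, router rules, and scanpaths as plain data structures. It is not the CF configuration syntax: the module instance names, conditions, and scanpath names are hypothetical and only mirror the scenario described above.

    # Illustrative model of a CF rule group: router rules are condition-action
    # pairs, and an action is either a default verdict or a scanpath (an ordered
    # list of module instances). All names below are hypothetical.
    SCANPATHS = {
        "http_default": ["virus_filter_strict", "content_filter", "html_strip_scripts"],
        "smtp_default": ["virus_filter_strict", "spam_filter"],
    }

    # Router rules of the HTTP rule group; the first matching condition wins.
    HTTP_RULES = [
        (lambda meta: meta.get("mime_type") == "text/html", "http_default"),
        (lambda meta: meta.get("size", 0) > 50_000_000, "REJECT"),
    ]

    def evaluate_rule_group(rules, default_action, meta):
        """Return the action or scanpath selected for the given object metadata."""
        for condition, action in rules:
            if condition(meta):
                return action
        return default_action

    # An HTML download is routed to the default HTTP scanpath.
    choice = evaluate_rule_group(HTTP_RULES, "ACCEPT", {"mime_type": "text/html"})
    print(choice, SCANPATHS.get(choice, []))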

2.1.6.2. Supported modules

The CF framework was designed to allow the easy and fast integration of various third-party Content Filtering tools. Currently the following modules are supported:

Some of the listed modules must be licensed separately from PNS/CF. For details, contact your distributor.

2.1.7. Virtual Private Networking (VPN) support

PNS uses strongSwan to support native Linux IPSec solutions, and also supports OpenVPN (an SSL-based VPN solution). PNS supports both transport and tunnel mode VPN channels. Tunnel authentication is possible with X.509 certificates and with pre-shared keys (PSK). IPSec settings can be negotiated manually, or by using Internet Key Exchange (IKE).

Virtual Private Networks

Figure 2.6. Virtual Private Networks


2.1.8. Native services

Native services provide a limited number of server-like features in PNS. Their use is optional and depends on the needs and security requirements of your organization. The use of Network Time Protocol (NTP) and BIND is recommended, while Postfix is useful for managing mail traffic from various firewall components locally.

These services are called native because they are installed with PNS and are available by default. They are implementations of the actual Linux services of the same names. These services provide networking services that are either difficult to implement with application proxies (or at the packet filter level) or provide services for the firewall itself. For more information on these services, see Chapter 9, Native services.

  • NTP: PNS hosts can function both as a Network Time Protocol (NTP) client and server. Time synchronization among the PNS hosts is very important for correct logging entries. Once the firewall's time is correctly synchronized, it can act as the authentic time source for its internal networks.

  • DNS: PNS features a fully functional ISC BIND 9 DNS server. Its use is optional, and should be avoided if security regulations explicitly prohibit the installation of non-firewall software on the firewall machine. However, in small and mid-sized networks, it can be beneficial to have a built-in name server, if it is solely used as a forward-only DNS server.

  • SMTP: PNS uses Postfix as the built-in SMTP server component. Postfix is used for SMTP queuing. PNS also has an application proxy for inspecting SMTP traffic, while CF can be used to perform virus, spam, and content-based filtering on the SMTP traffic. The primary role of this Postfix service is to provide a Mail Transport Agent (MTA) for the firewall itself: a number of mail messages can originate from the firewall, and these messages are delivered using the Postfix service. Although the Postfix service is a fully functional MTA in PNS, it is not intended to be a general purpose mail server solution for any organization.

2.1.9. High Availability

PNS supports multi-node (2+) failover clustering, as well as load balance clusters (most load balance configurations use external devices). Clustering support must be licensed separately. PNS supports the following failover methods:

  • Transferring the Service IP address

  • Transferring IP and MAC address

  • Sending RIP messages

For more information see Chapter 12, Clusters and high availability.

2.1.10. Operating system

PNS runs on Ubuntu-based operating systems. Currently it supports Ubuntu 22.04 LTS. You can either install the PNS packages on an existing Ubuntu server installation from the official BalaSys APT repositories, or use our installation media to install a minimal Ubuntu 22.04 LTS server and the PNS packages.

2.2. The concepts and architecture of PNS firewalls

The following sections discuss the main concepts of PNS firewalls.

2.2.1. Access control

A firewall controls which networks and hosts can be accessed, and who can access them. To create traffic rules, first you must accurately define the networking environment of PNS, then you can apply access control on the traffic. This can be achieved using zones and rules.

Zones consist of one or more IP subnets that PNS handles together. By default, there is only a single zone: the IP network 0.0.0.0/0, which practically means every available IP address (that is, the entire Internet). You can organize zones into a hierarchy to reflect your network, or the structure of your organization.

Although zones consist of IP subnets and/or individual IP addresses, zone organization is independent of the subnetting practices of your organization. For example, you can define a zone that contains the 192.168.7.0/24 subnet and it can have a subzone with IP addresses from the 10.0.0.0/8 range, and the single IP address of 172.16.54.4/32. For details on zones, see Section 6.2, Zones.
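
The following Python sketch only illustrates the idea described above: a zone can mix arbitrary subnets and single addresses, and a more specific zone takes precedence over the default 0.0.0.0/0 zone. The zone names are examples, this is not the PNS policy syntax, and the exact matching rules used by PNS are described in Section 6.2, Zones.

    # Conceptual sketch: zones as named collections of subnets, with the most
    # specific (longest-prefix) match winning. Not the PNS policy syntax.
    import ipaddress

    ZONES = {
        "office":   ["192.168.7.0/24", "10.0.0.0/8", "172.16.54.4/32"],
        "internet": ["0.0.0.0/0"],   # the built-in default zone
    }

    def find_zone(ip):
        """Return the name of the most specific zone containing the address."""
        addr = ipaddress.ip_address(ip)
        best_name, best_prefix = None, -1
        for name, subnets in ZONES.items():
            for net in map(ipaddress.ip_network, subnets):
                if addr in net and net.prefixlen > best_prefix:
                    best_name, best_prefix = name, net.prefixlen
        return best_name

    print(find_zone("10.1.2.3"))       # office
    print(find_zone("198.51.100.7"))   # internet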

2.2.2. Operation modes of PNS

The first line of network defense is a packet filter that blocks traffic based on the IP address or TCP/UDP port number of the source (that is, the client) or the destination (that is, the server) of the connection. That way, more thorough analysis, such as traffic inspection or Content Filtering, is performed only on traffic that is permitted at all. This technology, using both packet filtering and application proxying together, is called multilayer filtering.

  • Packet filtering: Traffic that can be filtered based on IP and TCP/UDP header information can be blocked at the packet filter level. Likewise, it is possible to forward traffic at the packet filter level without analyzing it with application proxies. For such traffic, PNS operates like an ordinary packet filtering firewall. Forwarding traffic at the packet filter level may be desirable for special protocols that cannot be proxied, if proxying causes performance problems in the connection, or in the case of non-TCP/UDP or bulk traffic. PNS provides a number of built-in, protocol-specific proxy classes for the most typical protocols, and it has a generic proxy for protocols not supported by the built-in proxies. Packet filter level forwarding is not recommended, unless it is absolutely unavoidable.

    Application proxies provide a higher level of security. Packet filters are the first line of defense; they can be used to block unwanted traffic. Traffic that is not blocked, on the other hand, should be filtered by the appropriate application proxies.

  • Traffic proxying: Application-level services inspect the traffic at the protocol level (Layer 7 in the OSI model). PNS provides a generic proxy, called PlugProxy, that does not perform special data analysis, but can be used to proxy the traffic. Application proxies always provide an additional level of filtering over packet filters.

2.2.3. Proxying connections

PNS is a proxy gateway. It separates the connection between the client and the server into two separate connections: one between the client and PNS, and another between PNS and the server. PNS receives the incoming client connection requests, inspects them, and transfers them to the server. PNS also receives the replies of the server, inspects them, and replies to the client instead of the server. That way PNS has access to the entire network communication between the client and the server, and can enforce protocol standards and the security policy of your organization (for example, permit only specific clients to access the server, or enforce the use of strong encryption algorithms in the connection).
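
The following highly simplified Python sketch illustrates this principle: the gateway accepts the client connection, opens a separate connection towards the server, and relays data between the two. It is a conceptual illustration only; real PNS proxies also parse and enforce the application protocol, and the listening and server addresses used here are placeholders.

    # Conceptual sketch of a proxy gateway: one connection towards the client,
    # another towards the server, with the proxy in between. Addresses are
    # placeholders; no protocol inspection is performed in this sketch.
    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8080)         # where clients connect
    SERVER_ADDR = ("198.51.100.20", 80)     # the protected server

    def pump(src, dst):
        """Copy bytes from one socket to the other until either side closes."""
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass
        finally:
            try:
                dst.shutdown(socket.SHUT_WR)
            except OSError:
                pass

    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen()

    while True:
        client, _ = listener.accept()                    # connection 1: client - proxy
        server = socket.create_connection(SERVER_ADDR)   # connection 2: proxy - server
        threading.Thread(target=pump, args=(client, server), daemon=True).start()
        threading.Thread(target=pump, args=(server, client), daemon=True).start()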

Proxying can take two basic forms:

  • Non-transparent: In case of non-transparent proxying, client connections target PNS instead of their intended destination. This solution usually requires some client-side setup, for example, to configure the proxy settings in the web browser of the client.

  • Transparent: To integrate into your network environment easily, PNS can operate transparently. In case of transparent proxying, the client connections target the intended destination server, and PNS inspects the network traffic directly. The client and the server do not detect that PNS mediates their communication. In case of transparent proxying, no client-side setup is required. This means that you do not have to modify the configuration of your clients and servers when PNS is integrated into your network: PNS is invisible for the end user.

2.2.4. Traffic analysis with proxies

The traffic in a connection usually consists of two parts:

  • control information (for example, headers and metainformation)

  • data (the payload)

The protocol proxies of PNS analyze and filter the control part of the traffic, but — in most cases — ignore the payload. (The antivirus and spam-filtering modules of CF inspect the payload.) PNS proxies can thoroughly inspect the protocol headers to ensure compliance to the protocol, disable the use of prohibited options, and so on. PNS can handle commonly used protocols, including:

  • FTP/FTPS

  • HTTP/HTTPS

  • IMAP/IMAPS

  • POP3/POP3S

  • SIP

  • SMTP/SMTPS

  • SQLNet

  • SSH

  • SSL/TLS

  • Telnet

  • VNC

Every protocol proxy can handle the SSL/TLS encrypted version of the protocol, and inspect the embedded traffic, giving control over HTTPS, SMTPS, and other connections.

For more information on supported protocols and for a complete list of proxies, see Proxedo Network Security Suite 2 Reference Guide.

2.2.5. Proxy customization

The default settings of the protocol proxies of PNS ensure that the traffic complies with the relevant RFC of the given protocol. To provide flexibility, and to solve a wide variety of custom scenarios, you can customize the proxies and change their parameters to best suit your environment and your security requirements. For example, it is possible to:

  • disable selected commands in FTP

  • modify the transferred headers in HTTP

  • permit using only selected web browsers

  • specify which encryption algorithms are permitted in SSL/TLS

In addition, the proxies in PNS are fully scriptable, and can be programmed in Python to perform any custom functionality. For information on customizing proxies, see Section 6.6.1, Customizing proxies, and Proxedo Network Security Suite 2 Reference Guide.
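
As an illustration of this customization model, the following sketch shows what a derived proxy class typically looks like in a Python policy. The class name StrictHttpProxy is ours, and the attribute and constant names are illustrative; verify the exact names in the Proxedo Network Security Suite 2 Reference Guide for your PNS version.

    # Hedged sketch of a derived proxy class in a policy file: the custom class
    # overrides config() and tightens the defaults of its parent class.
    # Attribute and constant names are illustrative.
    class StrictHttpProxy(HttpProxy):
        def config(self):
            HttpProxy.config(self)                       # keep the RFC-compliant defaults
            self.max_url_length = 4096                   # example: tighten a protocol limit
            self.request["GET"] = (HTTP_REQ_ACCEPT,)     # permit GET requests
            self.request["POST"] = (HTTP_REQ_REJECT,)    # reject POST requests
            self.request["*"] = (HTTP_REQ_REJECT,)       # reject every other method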

2.2.6. Modular architecture

Today, network traffic often uses more than a single protocol: it embeds another protocol into a transport protocol. For example, HTTPS is the HTTP protocol embedded into the Secure Socket Layer (SSL) protocol. SSL encrypts HTTP traffic, and many firewalls simply permit encrypted traffic to pass without thorough inspection. This is not an optimal solution from a security aspect, and PNS has a better solution to this problem: it decrypts and inspects the SSL traffic, and passes the data stream to an HTTP proxy to inspect it. This modular architecture (that is, proxies can be stacked into each other, or even chained together for sequential protocol analysis) allows for sophisticated inspection of complex traffic, for example, to perform virus filtering in HTTPS, or spam filtering in POP3S traffic. An integrated category-based URL filtering solution is also available in two variants: a smaller, optimized database for usual scenarios, which requires 1 GB of storage space and 300 MB of daily update traffic, and a more extensive normal database for extended scenarios, which needs 6 GB of storage space and 2 GB of daily update traffic.

Chapter 3. Managing PNS hosts

There are two ways to manage PNS firewalls:

  • using the Management Server (MS) and the Management Console (MC) graphical user interface (GUI)

    PNS and MS are designed to be configured using the Management Console (MC). The Proxedo Network Security Suite 2 Administrator Guide focuses primarily on the preferred MC-based administration method.

  • manually editing the configuration files of every host

    This method requires advanced Linux skills and deep technical knowledge about how PNS works. For more information on this procedure, see Chapter 10, Local firewall administration.

Note

You cannot mix the two administration methods: once you start editing configuration files manually, you cannot continue or revert to using MC, unless you rebuild the configuration from scratch.

3.1. MS and MC

MC itself is just a graphical frontend to MS. It is MS that stores configuration information and manages connections with the agents on managed hosts. The MS-based firewall management can only be performed through MC; there is no console alternative. MC is not “bound” to a particular MS host; as long as the proper username/password pair is known, it can be used to connect to any MS host.

MC can be started the following way:

  • Windows: Locate the PNS folder in the Start menu and click on the MC icon. If no such folder has been created when MC was installed, start MC.exe manually from the installation folder.

  • Linux: Start MC from the Network or Internet menu of your desktop environment, or from the console by executing the following command: ./<installation-directory>/bin/MC.

Selecting the MS engine to connect to

Figure 3.1. Selecting the MS engine to connect to


When you first start MC after the installation, the list of reachable MS hosts is empty; therefore, you must define a new host. To define a new host, click New. MC configurations are stored in a folder named .MSgui. This folder (for the Windows version of MC) is created in the installing user's profile directory, which is typically %SystemDrive%\Documents and Settings\%username%. The name of the file that stores MC configurations is MSgui.conf. The Linux version of MC stores configuration information in the same manner, within the user's home directory.

3.1.1. Procedure – Defining a new host and starting MC

Purpose: 

To define a new host entry and start MC, complete the following steps.

Steps: 

  1. To define a new entry in the list of reachable MS hosts, click New.

    Defining a new host in MC

    Figure 3.2. Defining a new host in MC


  2. Enter a name for the host in the Engine field. It can be an arbitrary string and does not have to be the same as the hostname of the MS Host.

  3. Enter the IP address of the host in the IP address field.

  4. Optional step: Fill the Port field or leave it empty to use the default TCP port 1314.

    You can change the port assignment later, if needed.

  5. Click OK. The new entry is now listed in the Engine list.

  6. To continue with authentication, click OK.

    MC authentication

    Figure 3.3. MC authentication


    By default, the built-in administrator account of MS and therefore PNS is admin. Its MS password was defined during installation.

    The name of the administrator can be changed or additional administrators can be added later through MC. To modify existing users or add new ones, see Section 13.1.1, Configuring user authentication and privileges. To create user accounts with limited privileges (for example, users who can only look at the configuration for auditing purposes, but cannot change anything) see Section 13.1.1.4, Configuring user privileges in MS.

Expected outcome: 

After you enter the correct password, and if network connectivity is available, the MC main screen appears.

Note

When MC connects to a MS engine for the first time, it displays the SSH-style fingerprint of the MS host. During later connections, it checks the fingerprint automatically.

Warning

MC and the MS to be accessed must have matching version numbers. For example, MS 2.2.1 must be accessed with MC 2.2.1. Login is not permitted if the version numbers of MS and MC differ.

3.2. MC structure

MC is divided into three main parts, as presented in the figure:

MC main screen

Figure 3.4. MC main screen


Note

For more information on the configuration buttons of the button bar, see Section 3.3.2, Configuration buttons.

The following sections introduce MC components and highlight their main purposes.

3.2.1. Configuration tree

The configuration tree lists the configurable components of a PNS system. Whenever you select an item in the configuration tree, the main workspace displays the configurable parameters of the selected item. The configuration tree is organized hierarchically and this hierarchy maps the management philosophy of PNS.

Configuration tree in MC

Figure 3.5. Configuration tree in MC


The topmost item in the configuration tree is the name of the Site that you entered during MS host installation. There are usually one or more items below it: MS and/or PNS hosts.

In the most basic scenario, where MS is installed on the PNS machine, there is only one machine listed. Note that in this case the name that appears here is the name of the MS host entered during installation. Under each host, a varying number of configuration components are listed.

By default two components are available for each host:

  • Management agents

  • Networking

Because the MS Host in this example is a Management server too, it has a third component for configuring management server parameters.

Each site, host, and component has status icons or leds on its left. These are described in detail in Section 3.3.6, Status indicator icons.

The number of components increases as you start the real work: many services have standalone configuration components that you have to add to the configuration tree to use them.

The forthcoming chapters deal with these components in detail.

3.2.1.1. Site

The largest configuration entity most PNS systems consist of is the Site. A Site is a collection of network entities that belong together from a networking aspect.

From the firewall administration point of view, the Site is the collection of the machine nodes. If the company is large and/or has geographically separated subdivisions, more than one firewall may be required. If they are all administered by a single administrator (or team of administrators), they can all fall under the supervision of a single MS host. In this case, the Site consists of an MS Host and a number of firewalls.

The reverse of this setup is not possible: a single PNS firewall cannot be managed by more than one MS host, because such a setup would lead to undefined and inconsistent firewall states.

If you have also purchased the High Availability (HA) module for PNS and therefore have two firewall nodes clustered, they can be administered as a single host. Clusters are described in detail in Chapter 12, Clusters and high availability.

Technically, MC machines do not belong to the Site(s) they administer, even though physically they are located in close proximity to them.

A Site is a typical container unit and the components of a Site (that is, the Hosts) share only a few but important properties:

  • Zone configuration

    All Hosts (firewalls) belonging to the same Site share a common zone configuration. For more information on zones, see Chapter 6, Managing network traffic with PNS.

  • Public key infrastructure (PKI) settings

    PNS makes heavy use of PKI, for example, in securing communication between MS and the firewalls, in authenticating IPSec VPN tunnels, and in proxying SSL-encrypted traffic.

Although a Site can be managed by a single MS Host only, an MS Host can manage more than one Site.

Tip

A possible reason for a company to create more than one site may be to maintain different Zone structures for different sets of firewalls. This is a frequent requirement for geographically distributed corporations that have separated network segments defended by PNS firewalls, but want to maintain central (MS-based) control over their firewalls.

Another possible user of multi-site, single-MS setups is a support company that performs outsourced PNS administration for a number of clients. In this scenario all business clients are ordered into separate sites, but all these sites are managed by the support company's single MS Host.

3.2.1.2. Host

A Site is composed of one or more Hosts. Hosts can be the following items:

  • PNS firewalls,

  • CF hosts,

  • AS hosts, and

  • MS-managed hosts.

In the most minimal setup, when PNS and MS are installed on the same machine, there is only one Host registered for the given Site. The number of PNS firewall nodes per Site is only limited by the number of licenses purchased. With the exception of Zone, PKI and Class Editor settings, the configuration parameters are always per Host.

To display system statistics for a Host component (MS or PNS), click on the name of the Host. The statistics are displayed under the Host tab. The following statistics are available:

  • processor

  • load average

  • uptime

  • memory usage: RAM and SWAP

  • only on PNS hosts:

    • the version number of PNS

    • the number and status of running PNS Instances, Processes and Threads

  • the validity and size of the product licenses (PNS, MS, and so on) available on the host.

    MC displays a warning if a license expires soon, and an alert e-mail can be configured as well. For details on configuring e-mail alerts for license expiry, see Procedure 11.3.8.9, Monitoring licenses and certificates.

    Note

    To access license information from the command line, login to the host and enter:

    /usr/lib/vms-transfer-agent/MS_program_status hoststat
Host statistics

Figure 3.6. Host statistics


The statistics are automatically refreshed every 30 seconds by default.

Tip

Although host statistics may seem like a less important, auxiliary service, they are extremely useful when firewalls operate under continuous heavy load and you want to optimize resource allocation.

3.2.1.3. Component

The actual configuration of hosts is performed using configuration components. These components are bound to specific firewall services. For example, there are separate components for Postfix (Mail transport), for NTP (Time) and for PNS itself. The list of usable components depends on the type of host under configuration. Most components belong to PNS firewall hosts.

By default, there are two components added for each host: Networking and Management Agent. For MS hosts the Management Server component is added automatically.

3.2.1.3.1. Procedure – Adding new configuration components to host

Purpose: 

To add a new configuration component to a host, complete the following steps.

Steps: 

  1. Select the host you want to add a new component to in the Configuration tree.

    Adding new components

    Figure 3.7. Adding new components


  2. Navigate to the Host tab, and under the Components in use section, click New.

  3. Select the configuration component to add from the Components available list.

    Note

    For managing PNS firewall hosts, it is essential to add the Application-level Gateway and the Management Access components, at the very minimum.

    The configuration components are strictly focused on the service they manage and all have a distinctive graphical management interface accordingly. For more information on the different components, see the respective chapters.

    Configuration components

    Figure 3.8. Configuration components


    The following components are available:

    • Authentication Server: Authentication Server (AS)

    • Content Filtering: Content Filtering (CF)

    • Mail transport: POSTFIX

    • Transfer Agent:

    • Management Server: Management Server (MS)

    • Networking:

    • Management Access:

    • System logging: syslog-ng

    • Text editor:

    • Time: NTP

    • VPN:

    • Application-level Gateway:

  4. Select the template to use for the component from the Component templates list.

  5. Depending on the component, either enter default values for the component in the window that appears or select a default configuration template.

    These built-in templates are configuration skeletons with some default values and options preset. Creating new configuration templates is also possible.

  6. Click OK.

3.2.2. Main workspace

Most of MC is occupied by the main workspace where you can manage the various components of the Configuration tree. The majority of configuration tasks are performed here. The content of this pane depends on which component you select in the configuration tree.

You can add new administrative components to the host at the bottom part of the main workspace.

If you select a different Host in the Configuration tree, the content of the main workspace changes too.

Tip

The keyboard shortcuts of MC are listed in Appendix A, Keyboard shortcuts in Management Console.

3.2.3. Menu & status bars and Preferences

Although most options of the menu bar are available as buttons on other parts of the MC window, there are some special menu points that have no corresponding button in the main workspace. The buttons are described in the specific section which deals with the corresponding MC part they appear in.

3.2.3.1. Procedure – Configuring general MC preferences

Purpose: 

To configure general MC preferences, complete the following steps.

Steps: 

  1. Navigate to Edit > Preferences... and select the General tab.

    Edit > Preferences... > General - Editing MC preferences

    Figure 3.9. Edit > Preferences... > General - Editing MC preferences


  2. Edit confirmation window preferences in the Confirmations section:

    • To display a confirmation window before quitting MC through File > Quit or Ctrl+Q, select Confirm exit.

    • To display a confirmation window before committing the configuration changes to a component, select Confirm commit component.

    • To display a confirmation window before reverting the configuration changes to a component, select Confirm revert component.

    • To display a confirmation window before uploading the configuration changes to a component, select Confirm upload component.

    • Commit and Upload can be combined into a single action, which means that if you want configuration changes to reach the firewall immediately – and not just the MS database – you can do it with a single click. To combine Commit and Upload, select Actions follow dependent components.

  3. Edit tree-related preferences in the Trees section:

    • Expand menu tree.

    • Expand trees in plugins.

  4. Edit text editor font-specific preferences in the Font section:

    To configure the Text editor font of MC, click the font selection button. Select the font Family, Style and the font Size and click OK.

  5. Edit result dialog-specific preferences in the Result dialog section:

    • Scroll to the last line.

    • Keep the results after closing the window.

  6. The status of the interfaces is automatically updated periodically. To configure the update frequency, edit the Program status section:

    Auto refresh in seconds.

    To turn off auto-refresh, deselect Auto refresh.

  7. In the Display status tooltip section, configure how long status tooltips are displayed when you hover your mouse over a status led or status icon:

    Enter the duration in seconds in the Show field.

    To turn off status tooltips, deselect Show.

    For details on status, see Section 3.3.6, Status indicator icons.

  8. Edit browser-specific preferences in the Browser section:

    The Proxedo Network Security Suite 2 Administrator Guide and Proxedo Network Security Suite 2 Reference Guide are automatically installed with MC in HTML format and are accessible from the Help menu. The guides are opened in the default System defined browser.

    To configure a different browser, select Custom and enter the full path of the browser to use in the Browser path field.

    Tip

    The latest version of these guides, as well as additional whitepapers and tutorials are available at the Documentation Page.

  9. Edit changelog-specific preferences in the ChangeLog section:

    MS records the history of configuration changes in a log file. The logs include who modified which component of the PNS Gateway system and when. Component restarts and other similar activities are also logged, and administrators can add comments to every action to make auditing easier. By default, MC automatically displays a dialog for commenting the changes every time the MS configuration is modified, or a component is stopped, started, or restarted. The changelogs cannot be modified later.

    For details on writing changelog comments, see Procedure 3.3.4, Recording and commenting configuration changes.

    To configure when MS automatically opens the New changelog entry window, change Edit changelog when to:

    • Commit if you want to automatically open the New changelog entry window after committing the configuration changes to a component.

    • Quit if you want to automatically open the New changelog entry window only before quitting MC through File > Quit or Ctrl+Q.

    • Never if you never want to automatically open the New changelog entry window.

3.2.3.2. Procedure – Configuring PNS Class Editor preferences

Purpose: 

To configure PNS Class Editor preferences, complete the following steps.

Steps: 

  1. Navigate to Edit > Preferences... and select the Application-level Gateway Class Editor tab.

    Edit > Preferences... > PNS Class Editor - Editing PNS Class Editor preferences

    Figure 3.10. Edit > Preferences... > PNS Class Editor - Editing PNS Class Editor preferences


  2. Configure display-related preferences in the Display section:

    • To enable syntax highlighting, select Enable syntax highlighting.

    • To enable syntax validation by checking braces, select Enable braces check.

    • To display line numbers, select Display line numbers.

    • To display right margin, select Display right margin. Enter the column number where you want the right margin line to be displayed in Right margin at column.

  3. Configure tabulation in the Tabs section:

    • Enter the indentation width in spaces in the Tabs width field.

    • For auto-indentation, select Enable auto indentation.

    • For smart behavior for Home and End keys, select Smart Home/End behaviour.

3.2.3.3. Procedure – Configuring PNS Rules preferences

Purpose: 

To configure PNS Rules preferences, complete the following steps.

Steps: 

  1. Navigate to Edit > Preferences... and select the Application-level Gateway Rules tab.

    Editing MC preferences

    Figure 3.11. Editing MC preferences


  2. Configure rule list-related preferences in the Rule list section:

    • To display the complete content of a cell as tooltip, select Show full cell content as tooltip.

    • To wrap cell content if it exceeds a certain length, select Wrap cell content to multiple lines. Define the Maximum number of rows in cell.

3.2.3.4. Procedure – Configuring MS hosts

Purpose: 

To add, delete or edit MS hosts, complete the following steps.

Steps: 

  1. Navigate to Edit > Servers....

    • To add a new host, click New. For details on the configuration steps, see Procedure 3.1.1, Defining a new host and starting MC.

      Note

      MC cannot connect to more than one MS host simultaneously. After adding a new host, MC will not change to that new host, but will stay logged in to the host that you are currently configuring. To configure the new host, navigate to File > Relogin... and login to the new host.

    • To edit an already existing host, click on the name of the host and click Edit.

    • To delete an already existing host, select the name of the host and click Delete.

      Warning

      There is no confirmation window after clicking Delete, the host is deleted instantly. Make sure that you only click Delete if you want to actually delete a host.

  2. Click Close.

3.2.3.5. PKI menu

PKI settings are always site-wide and can be configured graphically using the PKI menu only. For more information on PKI usage under PNS, see Chapter 11, Key and certificate management in PNS.

3.2.3.6. Variables menu

To simplify management, it is possible to define system variables for MS. These variables all have a scope, depending on which component is selected in the Configuration tree when they are declared.

Altogether, variables can work in three scopes that correspond to configuration levels in the Configuration tree: Site, Host and Component.

Tip

Using variables is especially useful in sophisticated, enterprise PNS environments where complex configurations have to be maintained. When referencing variables inside configuration windows, $ characters must precede and follow their names. For example, $autobind-ip$.

Using variables makes it simpler and less error-prone to change a value that is present in many different places: modifying the corresponding variable changes the value everywhere it is used.

Note

Instead of variables, it is recommended to use links.

By default there are two host variables defined:

  • Hostname, and

  • autobind-ip.

3.2.3.6.1. Procedure – Defining variables

Steps: 

  1. Select a Site, Host or Component in the configuration tree.

  2. Navigate to Edit > Variables....

  3. To create a new variable, click New.

  4. Enter the Name-Value pair in the respective fields.

  5. Click Close.

3.2.3.6.2. Procedure – Editing variables

Steps: 

  1. Select a Host in the configuration tree.

  2. Navigate to Edit > Variables....

  3. To edit a variable, select the variable and click Edit.

  4. Enter the new Name-Value pair in the respective fields.

  5. Click Close.

3.2.3.6.3. Procedure – Deleting variables

Steps: 

  1. Select a Host in the configuration tree.

  2. Navigate to Edit > Variables....

  3. To delete a variable, select the variable and click Delete.

    Warning

    There is no confirmation window after clicking Delete, the variable is deleted instantly. Make sure that you only click Delete if you want to actually delete a variable.

  4. Click Close.

3.2.3.7. Status bar

The bottom line of MC is called the status bar. When working with configuration components it can be used to check whether changes have already been Committed or there are Unsaved changes. By checking the status, you can determine whether what you see on the MC interface is the same as the information currently stored in the MS configuration database (committed status) or not.

The Status bar

Figure 3.12. The Status bar


3.3. Configuration and Configuration management

Most configuration tasks concerning PNS are component-based, and even those that are site-wide, such as Zone manipulation, must be individually uploaded to all firewalls of the given site. Therefore, configuration tasks can be organized into cycles and most elements of these cycles are the same regardless of the component that is configured. In fact, most of the configuration is repetitive and therefore lends itself to standard procedures.

In this section, after a brief overview of the most typical steps, the configuration process and the tools (buttons) that are used to perform each task are presented.

3.3.1. Configuration process

When you log in to MS through MC, first an SSL-encrypted channel is built, then the firewall configurations currently stored in the MS database are downloaded into MC. When you finish making configuration changes, you commit them back into the MS database. At this point no changes are made to the firewall(s); only the database on the MS host is modified. It takes a separate action, an upload, to actually propagate the changes from the database down to the firewall(s). With this upload action the configuration changes get integrated into the configuration files on the PNS machine(s). For final activation, a reload or restart (depending on the situation and the service being modified) is needed to activate the changes.

A complete configuration cycle consists of the steps described in the forthcoming sub-sections.

3.3.1.1. Procedure – Configuring PNS - the general process

  1. Select the component that you want to configure in the Configuration tree.

  2. Perform the actual configuration changes on the component. For details, see the relevant chapters.

  3. Commit changes to the XML database of MS. Otherwise, the changes are lost when you navigate to another component.

    Write a brief summary about the changes into the Changelog. For details, see Procedure 3.3.4, Recording and commenting configuration changes.

  4. To activate the changes, upload them to the affected PNS firewall hosts from the MS database. MS converts the changes to the proper configuration file format and sends them to the transfer agents on the firewall nodes. The changes are applied on the firewall nodes.

  5. Reload the altered configurations on the firewalls, or restart the corresponding services.

Note

Not all of these steps are performed in each configuration cycle. Service reloads or restarts are typically postponed as long as possible and are likely to be performed only after all configuration tasks with the various service components are finished.

3.3.2. Configuration buttons

Most administration commands for the configuration tasks can be executed from either the menus or the buttons in the Button bar. The number of buttons visible varies based on the component that you have selected in the Configuration tree.

The Button bar

Figure 3.13. The Button bar


3.3.2.1. Commit and Revert

The Commit changes and Revert changes buttons are always visible, at the minimum.

Commit is used when you finish a (set of) configuration changes and want to save these changes to the XML database of MS.

Revert serves the opposite purpose: before committing changes to the MS database, you can clear (undo) them in MC.

Note

It is very important to remember that Revert is limited to MC, it cannot clear configuration changes that are already committed to the MS database. Those changes can be undone by performing a new round of changes in MC and then committing these changes again.

Both Commit and Revert are component–focused controls. Consequently, before you select another component from the Configuration tree, you must commit the changes in the current component, otherwise they are lost. In such cases, MC displays the following warning:

Warning: commit the changes before leaving the component

Figure 3.14. Warning: commit the changes before leaving the component


3.3.2.2. Upload current configuration

Upload is used to upload the configuration changes committed to the configuration database on the MS Host further to the corresponding PNS firewall(s).

Tip

Commit and Upload can be combined into a single action, which means that if you want configuration changes to reach the firewall immediately – and not just the MS database – you can do it with a single click. To combine Commit and Upload, navigate to Edit > Preferences and select Actions follow dependent components.

3.3.2.3. Control service

Under Control service, you can Reload or Restart services so that they reread the new configuration files that are already on the corresponding PNS firewalls after a successful Commit / Upload cycle.

Whether to Restart or Reload a given service depends on the type of service (some cannot be reloaded, only restarted) and the intended outcome.

After clicking Control service, the following actions are available:

The service control dialog

Figure 3.15. The service control dialog


Note

Besides Restart and Reload, there are also Start and Stop functions available here to start or stop services.

3.3.2.4. View and Check current configuration

View current configuration and Check current configuration are both used to retrieve information on the current state of the PNS firewall(s).

View current configuration displays the configurations of the component selected in the Configuration tree on the selected host. This information comes from the MS configuration database, which is not necessarily the same as the actual settings on the selected host – when changes are already committed, but not yet uploaded. For example, if you select the MS_Host > Networking component and then click View current configuration, you will see the following:

Networking configuration on MS_Host

Figure 3.16. Networking configuration on MS_Host


It is a file-by-file listing of the active configuration on the selected host. Note that it is not necessarily the same configuration that is stored in the MS database: after a commit but prior to an upload event they can differ significantly. To query this difference, click Check current configuration. Using the Linux diff utility by default, it compares configurations stored in the MS database with the configurations currently active on the selected host.

Checking current configurations

Figure 3.17. Checking current configurations


The differences are marked in red, otherwise you see the normal output of diff, with + and – signs designating data from the host and from the database, respectively. The diff command can be replaced with another utility of your choice under the Management Server component. For details, see Chapter 13, Advanced MS and Agent configuration.
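
For orientation, the output follows the usual diff conventions. A purely illustrative fragment (the file and the addresses are invented for this example) could look like the following, where the line starting with - shows the value stored in the MS database and the line starting with + shows the value currently active on the host:

  -nameserver 192.0.2.10
  +nameserver 192.0.2.53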

3.3.2.5. Files

Configuration files: 

Files provides further information about, and configuration options for, the files and attributes described in the output window of Check current configuration and the diff command.

Files serves two purposes: it provides vital information about which configuration files a component (of the Configuration tree) uses, and it gives you the chance to modify the properties of the listed files.

For example, in case of the Networking component, the list of used files is the following.

Files used by the Networking component

Figure 3.18. Files used by the Networking component


Apart from the name and location of files, you can retrieve information about the owner, owner group, access rights and file type parameters. The Manage column is very important and has a corresponding checkbox immediately below the file listing: this can be used to control what files MS manipulates on the host machine, if needed.

Note

It is not recommended to take files out of the authority of MS, because it can severely limit the effectiveness of MS-based administration. However, if needed, you can do so by deselecting the checkbox in the Manage column.

File settings: 

To modify the properties of a file, click on the file in the list. The following subwindow opens.

Changing file properties

Figure 3.19. Changing file properties


Warning

There must be a solid reason for changing these properties and you must be prepared for the possible consequences of such actions. A good understanding of Linux is recommended before making changes in file properties.

Consider different if these properties change: 

The third part of the window is for configuring the work of the comparison utility, which is diff by default. You can define what file properties you are interested in when checking for changes.

Configuring diff conditions

Figure 3.20. Configuring diff conditions


Tip

Checking for configuration file differences is beneficial from a security aspect too: it is an additional tool for making sure nobody has altered critical files on the firewall.

Postprocess script: 

At the bottom of the Configuration tab, you can specify a postprocess command that is run after the corresponding configuration file is uploaded to the firewall host. Some services rely heavily on this option. For example, Postfix runs /usr/sbin/postmap %f as a postprocess command so that transport maps, virtual domain maps and the various access restrictions are applied properly.
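
Assuming that %f is substituted with the path of the uploaded file (a common placeholder convention, stated here as an assumption), uploading a virtual map stored, for example, at /etc/postfix/virtual would make MS run a command such as:

  /usr/sbin/postmap /etc/postfix/virtual

This rebuilds the indexed lookup table (typically /etc/postfix/virtual.db) that Postfix actually reads at run time.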

Scripts tab: 

Configuration files under Linux are reread during service reloads or restarts. These actions are performed by running the corresponding scripts exclusively from the /etc/init.d directory. The Scripts tab of the Files window provides an interface where you can check starting scripts and alter and fine-tune them with special Pre upload and Post upload commands. With simple components, such as Networking, these options are rarely used, but in some cases might prove especially useful.

Some components, for example, Text Editor, can manage configuration files that are automatically reloaded. They cannot be restarted after a Commit. To set the status icon of these components to Running, select Configuration automatically runs on the Scripts tab.

3.3.3. Committing related components

Some components are related to or dependent upon each other, meaning that modifying one modifies the other too. If modifying a component affects another component, the status of the related component changes to Invalidated in the Configuration tree. MC can automatically handle related components for all actions (Commit, Reload, and so on): just select the Apply action for the dependent components checkbox in the confirmation dialog of the action.

When reloading or restarting a component related to the Management Access, the skeleton of the Management Access is automatically regenerated.

3.3.4. Procedure – Recording and commenting configuration changes

Purpose: 

MS records the history of configuration changes in a log file. The logs include who modified which component of the PNS Gateway system and when. Component restarts and other similar activities are also logged, and administrators can add comments to every action to make auditing easier. By default, MC automatically displays a dialog for commenting the changes every time the MS configuration is modified, or a component is stopped, started, or restarted. The changelogs cannot be modified later.

The behavior of the changelog window can be configured in Edit > Preferences > General. For details, see Procedure 3.2.3.1, Configuring general MC preferences.

To review the existing changelog entries, navigate to Management > Changelogs. The window contains two filter bars: the Changelog entries filters for changelog entries, the Components filters inside a single changelog entry, if it contains too many actions. For details, see Section 3.3.10, Filtering list entries.

Steps: 

  1. Optional step: If the New changelog entry window is not configured to display automatically after committing or quitting, you can add a new changelog entry manually. To do this, navigate to Edit > Changelog....

  2. Review the details of the changelog entry. Every changelog entry includes the following information:

    • Date: It is the date when the action was performed.

    • Administrator: It provides the MC username of the administrator who performed the action.

    • Host: It is the PNS Hosts affected by the action.

    • Component: It lists the MC components affected by the action.

    • Action: It is the type of change that was performed.

  3. Enter the following:

    • Summary: Enter a short summary of the changes.

    • Description: Enter a detailed description of the changes.

  4. To save the changelog entry, click Send.

3.3.5. Multiple access and lock management

Most firewalls are administered by a group of administrators rather than by a single individual. In a PNS system each administrator can have his or her own MC console, and administrators can be geographically separated. Regardless of their locations, they administer the same set of PNS firewalls through a single MS host machine. Therefore, to avoid configuration inconsistency caused by more than one administrator working with the same configuration simultaneously, a configuration lock mechanism ensures that a component's configuration can only be modified by a single administrator at a given time. Locking takes place per component: as soon as you change, for example, a setting in a component, the status bar displays the string Unsaved changes and that component is locked for you.

However, modifying a component might make it necessary for other components to be locked as well, in order to prevent inconsistent configuration changes.

The following general rules apply to managing locks successfully:

  • A component can only be modified if it is not locked by another administrator.

  • Modifying a component results in locking it.

  • Reading a component does not imply locking it.

  • If a site is modified, and is consequently locked, it implies the locking of PacketFilter, Networking and PNS components as well on every host of the site.

  • If a new component is created, it might need to be locked according to these rules, for example, if another administrator is modifying a component that requires the lock of the newly created component to preserve a consistent configuration structure.

  • A forced unlock, that is, the release of an administrator's lock by someone else, implies shutting down the relevant GUI. Consequently, all of that administrator's locks are released and the changes not yet committed are lost.

    Note that a forced unlock should be discussed with the affected parties before the lock is released, as it might imply loss of data.

  • If any of the changes to a component are reverted, the lock of the actual component is released.

  • If changes to a component are committed the lock of the actual component is released.

Active locks can be viewed at Management > Locks:

Management > Locks - Viewing active locks

Figure 3.21. Management > Locks - Viewing active locks


The Owner column can take two values:

  • Other

    If someone else is working with the given component.

  • Self

    Indicating your own lock.

The lock placement is automatic. The first administrator who starts modifying a component's settings acquires the lock on that component. In the Active locks column the exact name of the locked component (Site/Host/Component) is displayed. Locks are cooperative, meaning that any administrator can release any other administrator's locks by selecting the desired component in the Lock management window and then clicking Release. The administrator whose lock is released this way is immediately notified in a warning dialog. Also note that as a result of the forced unlock, the GUI owning the released lock closes without saving any modifications, in order to avoid any inconsistency in the configuration.

Note

As releasing the lock of another administrator is a rather radical interaction, concurrent administrators should discuss lock situations before possibly destroying each other's work.

It is not possible to edit a component that is already locked by someone else, because a notification dialog immediately appears upon trying to change anything inside the given component. The following window will be displayed for any other administrator who wishes to edit the component:

Notification on a locked component

Figure 3.22. Notification on a locked component


With the lock queue mechanism implemented in MS, administrators should not normally release each other's locks; instead, they can preregister for future locks while the current lock is active.

However, the above window does not disappear automatically after the locking administrator commits the changes. Other administrators can therefore either preregister for locking the component by clicking Yes, or choose not to preregister by clicking No.

In case an administrator preregisters for the locking of an element, the following window appears:

Component waiting for locking

Figure 3.23. Component waiting for locking


As soon as the locking administrator releases the lock, the Component waiting for lock window disappears and the user wishing to edit the component is granted the lock automatically. Normally, this does not take long.

3.3.6. Status indicator icons

The Configuration tree in MC displays various indicators to provide a quick overview about the status of the managed sites, hosts, and components. Site and Host-level status is indicated by leds, while icons are used to display Component-level status information.

Status indicator icons and leds

Figure 3.24. Status indicator icons and leds


Hovering the mouse cursor over a led or icon displays a tooltip with the full description of the status.

Tip

Status tooltips are displayed for the period configured in Edit > Preferences (for details, see Section 3.2.3, Menu & status bars and Preferences). It is also possible to disable the tooltips altogether.

Tooltip after hovering the mouse over an icon

Figure 3.25. Tooltip after hovering the mouse over an icon


Note

Similar leds are also used throughout MC to display information about the state of various objects, for example, network interfaces, PNS instances, NTP servers, and so on. These are described in their respective sections.

3.3.6.1. Site-level indicators

The Site-level led displays the validity of the certificates used by the PKI system of the Site (for details, see Chapter 11, Key and certificate management in PNS). The led has the following three different states:

  • Green : All certificates are valid.

  • Yellow : One or more certificates will expire soon.

  • Red : One or more certificates have expired.

    Note

    Expired or soon-to-expire certificates are displayed in bold in the PKI management tabs.

3.3.6.2. Host and cluster-level indicators

The status of the Host is displayed by four different leds. From left to right, these are the following:

All four leds can be Blue, indicating a partial or unknown status: this appears when the nodes of a cluster are in different states. For example, the transfer agent led is blue if the agent could establish the connection to only one of the nodes, or if the state of the agent is unknown.

Hovering the mouse pointer above the leds displays a tooltip containing detailed information about the leds, including a summary of which components are committed, uploaded, and so on.

Transfer and Monitor connection

These leds indicate the status of the Transfer and Monitoring agent connections, respectively.

  • Green : the agent is connected to the host.

  • Yellow : a connection attempt is in progress.

  • Red : the agent is disconnected.

The management connection leds display unknown status if the given connection is not enabled on the host and is in the disconnected state.

Key distribution

This led indicates the availability of the required certificates and keys on the Host.

  • Green : normal state.

  • Red : the certificates have been modified (for example, refreshed), but the new certificates have not been distributed yet to the Host.

Warning

This status led is especially important, because if the certificates are not distributed properly, MS will not be able to communicate with the Host. For details on distributing certificates, see Section 11.3.5.2, Distribution of certificates.

Configuration

The Configuration led indicates only the state of the component that is in the worst state. That is, if all components are in normal state but one of the components is not committed, the led will be in the unsaved state.

3.3.6.3. Component-level status indicators

The status of each component on a host (or cluster) is indicated by a single icon. The components can have the following states:

Modified : The component has been modified, but the changes have not been committed to the MS database yet.

Invalidated : This status indicates that the component has to be updated because of a modification that was performed in another component. For example, committing modifications of the Application-level Gateway component invalidates the Management Access component, because the packet filtering rules have to be regenerated. Invalidated components automatically become modified when they are selected from the Configuration tree.

Committed : The modifications of the component have been saved to the MS database, but the new configuration has not been uploaded to the host. For details, see Section 3.3.2.1, Commit and Revert.

Uploaded : The new configuration has been successfully uploaded to the host. For details, see Section 3.3.2.2, Upload current configuration.

Running : The uploaded configuration has been successfully activated on the host (for example, the service/instance has been restarted/reloaded, and so on). For details, see Section 3.3.2.3, Control service.

Partial : These states appear when the status of the nodes in a cluster differs from each other. For example, the partial uploaded icon indicates that the new configuration was not successfully uploaded to all nodes.

Locked: The component is in use (and has been modified) by another user (for details, see Section 3.3.5, Multiple access and lock management). This status is indicated by grayed-out versions of the above icons.

Occasionally it might be required to manually modify the status of a component. This can be done from the Configuration menu through the Mark as Committed and Mark as Running menu items.

Note

Only uploaded components can be marked as running.

3.3.7. Copy, paste and multiple select in MC

MC provides two graphical aids that can help administration when parts of a host's configuration settings have to be recreated on another host.

  • Copy and paste

    Elements of the configuration (for example, network interfaces, proxies, policies, and so on) can be copied and then pasted to another host. This method can also duplicate an element on the same host. All settings of the element are copied to the target host.

    To copy a configuration element, select it, right-click, and select Copy.

    To paste a configuration element, right-click where you want to paste the element and select Paste.

    Warning

    Make sure to verify the settings of the pasted element, especially the parameters that used links.

  • Multiple select

    • To select consecutive multiple components, select the first component, press Shift and click the last component.

    • To select multiple components that are not consecutive, select the first component, press Ctrl and click the next component that you want to select.

    If you select multiple components and right-click, the View, Check, Upload and Control operations are available. After clicking one of the options, all configuration files are batch-processed.

    Additionally, a program, for example an archiving script, can be run on the configuration files of all selected components. To do this, either select the command from the drop-down menu or enter the command in the Run program field and click Execute.

    Note

    Multiple selected components can also be copy-pasted.

Selecting multiple components in the Configuration tree

Figure 3.26. Selecting multiple components in the Configuration tree


3.3.8. Links and variables

You have two options to refer to components involved in network configurations.

  • Create links.

  • Use variables.

If you use links, you can manually enter IP addresses or select the link target from a drop-down menu. Using links has the advantage that future changes in the network setup do not influence the operability of the connection.

You can delete the existing links with the Unlink and Unlink as value options.

  • Unlink removes the link connection, meaning that the link field is left empty.

  • Unlink as value deletes the link but leaves the target IP address in the field which will then behave as a manually added address.

You can refer to components with variables. By using variables you change values appearing in several places at once. If you modify a variable, all corresponding values are changed. Variables are denoted with $ characters preceding and following their names.

Example 3.1. Referring to components with variables

The following is an example for a variable:

$autobind-ip$

3.3.9. Disabling rules and objects

During the management and maintenance of the firewall host it is often useful to be able to temporarily turn off certain rules, policies, and so on. In PNS this feature is implemented via the Disable/Enable options of the local menus. To display the local menu of a rule or object, right-click on the object. For example, a rule that is only rarely used can simply be disabled when it is not required, and enabled again when it is needed. Disabled rules and objects are generated into the configuration file as comments with the # prefix.
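
The following sketch is for illustration only, assuming a Postfix transport map managed by the Mail transport component (the domains and relays are invented for this example): an enabled entry appears as a normal line, while a disabled entry of the same kind is generated as a comment:

  example.com          smtp:[mail.example.com]
  # old.example.com    smtp:[legacy-relay.example.com]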

Disabled objects can be edited and modified like any other objects. However, their validity (whether, for example, the required parameters are filled in, their name is unique, and so on) is checked only when they are enabled again.

The following objects can be disabled in the various MC components:

Host:

Disabling a group automatically disables its children as well.

Note

Generated rules do not remain disabled after skeleton generation.

Application-level Gateway:

Networking:

Date and time:

Content Filtering:

AS:

  • Routers

IPSec VPN:

  • Connections

Mail transport:

  • Listen interfaces

  • Transport maps

  • Virtual maps

  • Sender address restrictions

  • Recipient address restrictions

3.3.10. Filtering list entries

MC displays information in several places as tables of entries with various parameters or meta-information. Such filter windows are used to display firewall rules, Application-level Gateway logs, Active connections, and so on. The common properties and handling of these tables are summarized in this section.

A filter window

Figure 3.27. A filter window


Filter windows consist of three main parts:

  • Filter bar

  • Table displaying the entries

  • Command bar to perform various actions on the selected entries

The actions available in the command bar are described at the documentation of the actual component using the filter window (for example, Log viewer).

Each entry of the table consists of a single row, with the various parameters displayed in labeled columns.

  • To sort the entries, click on the column headers.

  • To modify the order of the columns, drag the column header to its desired place.

  • To hide a single column, right-click on the column header and click Hide This Column.

  • To configure the columns to be displayed, right-click on the column header and click Set columns..., select a column and move it with the arrow. Click OK.

To filter the entries, use the filter bar located above the table. The Filter type specifies the column to search for the expression (string or regular expression) that you type in the field. To search, click Filter now. To restore the full list, click Clear. To create custom filter expressions, select the Advanced option as Filter type. The filter expressions can also be combined using the logical AND (if all criteria are met) and OR (if any criteria are met) operations. You can run custom filters once (click Ok), or you can bookmark them for repeated use (click Save).
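
For example (an illustrative case; the exact column names depend on the table being filtered): to display only the messages produced by cron in the Log viewer, select Program as the Filter type, type cron into the field, and click Filter now. Clicking Clear afterwards restores the full list.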

Tip

To display the Advanced filter editor dialog with the latest advanced filter, click ....

Advanced filtering

Figure 3.28. Advanced filtering


Saved filters are also displayed in the Filter type combobox. To manage saved filters (delete, rename and edit), click Edit bookmarks.

Editing bookmarks

Figure 3.29. Editing bookmarks


In some cases (for example, in the Log viewer), you can configure advanced filters to highlight the entries matching the filter expression. To configure highlighting, select Color matching and set the color of the highlighting.

Tip

By assigning different colors to different bookmarked filters, the important elements of the table can be highlighted in several colors.

3.4. Viewing PNS logs

Logs provide an interface for inspecting the log messages collected on a host. To view the logs of a host, select the Host and click View log.

Tip

PNS can also create reports about the transferred traffic. For details, see Section 6.9, Traffic reports.

The log viewer interface consists of a filter bar to select the messages to be displayed, a list of the actual log entries (including meta-information such as timestamp, and so on), and a command bar.

Viewing logs

Figure 3.30. Viewing logs


The following information is displayed about the messages:

  • Timestamp: It shows the exact date when the message was received.

  • Host: It is the host sending the message.

  • Program: It is the application sending the message (for example, cron, MS-engine, and so on).

  • Pid: It is the Process ID of the application sending the message.

  • Message: It is the message itself.

The Log viewer window is a Filter window. Therefore, you can use various simple and advanced filtering expressions to display only the required information. For details on the use and capabilities of Filter windows, see Section 3.3.10, Filtering list entries.

3.4.1. The command bar of the log viewer

The command bar offers various operations to display and export the logs of the host. The following operations are available:

  • Follow: Start the follow mode and monitor the log messages in real time. The list is updated every second.

  • Jump to: Select a time interval and display the log messages received within this interval. To specify an interval, enter its Start date and Start time, and either both End date and End time, or the Interval length. Enter the name of the log file in the Log file field.

    Selecting the log interval to be displayed

    Figure 3.31. Selecting the log interval to be displayed


  • Previous: It displays the log messages of the previous period (using the same interval length).

  • Next: It displays the log messages of the next period (using the same interval length).

  • Stop: It stops the Follow mode.

  • Export: It exports the currently displayed log messages into a file on the local machine that is running MC, using either plain text or CSV file format.

  • Type of messages: The rightmost combobox selects the type of messages that are displayed (for example, MS, and so on).

Chapter 4. Registering new hosts

PNS and MS can be used in several network scenarios. In the simplest case there is only a single firewall host having both PNS and MS services installed. In this case, the communication between MS and the PNS management agents takes place locally, using Unix domain sockets, and it does not require network communication setup. However, when the two functions, that is, firewalling and management, are separated and installed on two different machines, the initial communication channel between the two requires manual setup. After successful setup, all further communication is initiated automatically without manual interaction. This channel setup is a one-time action, but it must be performed separately for each new PNS firewall placed under the authority of an MS host. This process is called bootstrapping and is performed similarly to running a wizard. By the end of the bootstrapping process, the new host is added to the host configuration database of the MS host machine.

The connection between MS and PNS can be established in the following ways:

  • using bootstrap

  • manually through the Recovery Connection function

  • completely manually

Bootstrapping a PNS host is one of the simplest methods. Bootstrapping is similar to running a wizard, that is, answering questions and allowing the wizard to carry out the necessary configurations. Alternatively, the connection can be established manually. This method may especially be needed in troubleshooting scenarios, with the help of the Recovery Connection button. Hosts can also be added in a completely manual way, by selecting a site and then clicking Add in the main workspace. For more details, see the Proxedo Network Security Suite 2 Reference Guide.

4.1. Procedure – Bootstrap a new host

Purpose: 

To bootstrap a new host, complete the following steps.

Steps: 

  1. Select the desired site in the configuration tree.

  2. Click Bootstrap at the bottom of the screen. This will start the wizard.

    Bootstrapping a new host

    Figure 4.1. Bootstrapping a new host


    Note

    Alternatively, to register a host manually, click New next to Bootstrap.

  3. Select a host template.

    Selecting a host template

    Figure 4.2. Selecting a host template


    The default templates are the following. New templates can be created later.

    • Cluster minimal template

      This template can be used to configure clustered solutions.

    • Host default template

      This template can be used to add the NTP, Application-level Gateway and Management Access components automatically to the Configuration tree under the name of the newly added host.

      For details, see:

    • Host minimal template

      This template can be used to add only the two default components: Management agents and Networking.

    It is recommended to select the Host default template because it already has some components preconfigured.

    Tip

    If you work with several PNS hosts it can be useful to create predefined templates, to save repetitive work. For details on creating templates manually, see Chapter 6, Managing network traffic with PNS.

  4. Enter the details of the new host.

    Parameters of the new host

    Figure 4.3. Parameters of the new host


    Host name: It is the name of your PNS firewall.

    Networking name: It is the name of the Networking component.

    Management agents name: It is the name of the Management agents component.

    Date and time name: It is the name of the Date and time component.

    NTP Server: Specify a time server that PNS synchronizes its system time with. Usually, but not necessarily, it is an external time source. For the up-to-date list of publicly available time servers, see http://support.ntp.org/bin/view/Servers/WebHome. For more information on NTP, see Chapter 9, Native services.

    Management Access name: It is the name of the Management Access component.

    Application-level Gateway name: It is the name of the Application-level Gateway component.

  5. Enter the IP address and configuration port number of the Transfer agent.

    Entering the management IP address of the host

    Figure 4.4. Entering the management IP address of the host


    Before you start, discover which network interface and IP address is reachable from your network location. Firewalls almost always have more than one of these. Ensure that the IP address you type in is reachable from your location and that packets will find their way back from the firewall. In other words, make sure that all routing information is correctly configured.

    You can configure other interfaces of PNS to be reachable for configuration purposes later.

    1. Enter the IP address.

      Refer to the Firewall's installation documentation for the IP address information.

    2. Leave the Port field default (1311).

  6. Create a certificate for SSL communication establishment.

    Firewalls are administered from a protected, inside interface and while this method is highly recommended, it is not necessarily required. All the configuration traffic is encrypted, using SSL.

    The administrative connection is encrypted using SSL, which requires a certificate, in particular the public key it contains. This certificate and the private key used for encryption/decryption are sent to the Management agent on the firewall node, which uses it to encrypt the session key it generates. For more information on SSL communication establishment, see Chapter 11, Key and certificate management in PNS.

    Enter the parameters of the certificate and it will be generated automatically.

    Creating a certificate for SSL communication establishment

    Figure 4.5. Creating a certificate for SSL communication establishment


    1. To Create a new certificate, enter a name for the certificate in the Unique name field. Alternatively, to Use an existing certificate, browse for a certificate from Certificates. The following steps describe the details of creating a new certificate.

      Tip

      There are no particular requirements for the Unique name and Common Name fields other than trivial string length and restricted character issues. However, it is recommended to enter a name that will later — when there are more certificates in use in your system — uniquely and easily identify this certificate as the one used for establishing agent communication.

    2. In the Country field, enter the two-character country code of the country where PNS is located. For example, to refer to the United States, enter US.

    3. Optional step: In the State field, enter the state where PNS is located, if applicable. For example, California.

    4. In the Locality field, enter the name of the city where PNS is located. For example, New York.

    5. In the Organization field, enter the name of the company that owns PNS. For example, Example Inc..

    6. In the Org. Unit field, enter the department of the company that administers PNS. For example, IT Security.

    7. Enter a Common Name that describes you or your subdivision.

      Alternatively, you can use the default value: the name of your PNS firewall node.

    8. Configure the RSA algorithm. Select whether to use SHA-256 or SHA-512 Digest. Select the asymmetric key length from the Bits list.

      Note

      The U.S. National Institute for Standards and Technology (NIST) recommends 2048-bit keys for RSA.

    9. Configure the validity range of the certificate.

      To select the start date from the Valid after field, click .

      To configure the validity range, either select the end date from the Valid before field by clicking or enter the validity Length in days and press Enter.

  7. Enter the MS Agent CA password.

    Manually entered passwords protect private keys against possible unauthorized access. Even if an attacker gains read access to your hosts, your private keys cannot be stolen (read). These passwords are used to encrypt the private keys, which are therefore never stored in unencrypted format. Certificates are issued by Certificate Authorities (CA), and it is actually the CA's private key that requires this protection. The certificates used by the Management agents are issued by the MS_Agent_CA. You have to enter the password that you defined for this CA when installing the MS service. See also Chapter 11, Key and certificate management in PNS.

    Note

    To generate a strong password, it is recommended to use a password generator.

    Tip

    Take detailed logs of the installation process, including the bootstraps where all these passwords are recorded.

    Entering MS Agent CA password

    Figure 4.6. Entering MS Agent CA password


  8. Enter the One-Time Password.

    The one-time password is the one that you entered during the installation of vms-transfer-agent on the PNS host. It is used in a one-time operation: establishing an SSL channel between the Management agents of PNS and the MS host requires certificates, but at this point there are no certificates to use yet. Therefore, you have to provide a certificate for the vms-transfer-agent on PNS that can be used to build up the communication channel. The one-time password is used to establish a preliminary encrypted communication channel between PNS and the MS host, through which this certificate can be sent. All communication among the parties is performed using SSL.

    Entering the One-Time Password

    Figure 4.7. Entering the One-Time Password


  9. If all password phases have been successful, click the final OK button to build up the connection.

    The displayed logs provide information about the steps the wizard takes in the background. To save the output for later analysis (either by you or by support personnel), click Save.

    Note

    If anything goes wrong, the wizard takes you back to the window you made a mistake in, so that you can correct it.

    After the bootstrap process has finished successfully, the new host is ready to be configured.

4.2. Reconnecting to a host

When you start up MC and select a host in the Configuration tree, the connection with the host is automatically established. If, for some reason it breaks, you have to reconnect the host manually.

4.2.1. Procedure – Reconnecting MS to a host

  1. Navigate to Management > Connections....

    Managing connections manually

    Figure 4.8. Managing connections manually


    This window accurately shows that it is not the PNS host that directly communicates with MS, but the Management agent installed on it.

    Agents are responsible for reporting firewall configuration and related information to the MS and are also responsible for accepting and executing configuration commands. Communication between the Transfer Agent and MS uses TCP port 1311. The Transfer Agent must be installed on all firewall nodes to be managed with MS. By default, MS establishes the communication channel with the agents, but the agents can also be configured to start the communication if required.

  2. Connect and/or disconnect the appropriate agents with the corresponding buttons.

Chapter 5. Networking, routing, and name resolution

MS is a complex central management facility for PNS firewalls. Besides firewall-centric configuration settings, such as firewall policies and packet filter rules, it allows for the configuration of several basic parts of the operating system. In fact, one of the design goals of MS was to eliminate the need for command-line configuration of the operating system and PNS as much as possible. Therefore, tools are provided to perform basic, operating system-level configuration tasks.

The Networking component that is present by default for each host in the Configuration tree serves this purpose by providing access to all the relevant network-related configuration areas of the host's operating system. The possible settings in the Networking component are mostly related to ordinary network configuration issues and there are hardly any variables directly related to firewalling functions.

The main window of the Networking component is divided into the following four tabs.

Tabs in the Networking component

Figure 5.1. Tabs in the Networking component


Warning
Do not create any files with the '00-MS' prefix for the Networking component, that is, in the /etc/systemd/network/ directory, because the MS GUI handles these files and will probably modify or delete them.

5.1. Configuring networking interfaces

The configuration of network interfaces can be performed on the Interfaces tab of the Networking MC component. These tasks fall into the following categories.

General and special interface configuration features are available for every host managed from MS, while spoof protection is intended for PNS gateways or other hosts that have an active packet filter installed.

5.1.1. General interface configuration

The Interfaces tab of the Networking MC component lists the network interfaces available on the host, along with their type, IP addresses, and connected zones.

Network interface configuration

Figure 5.2. Network interface configuration


Tip

If you do not see one or more physical interfaces of your host listed here, it is most likely because they were not configured before bootstrapping took place. The bootstrapping process not only establishes the connection between MS and the host (Management agents on it), but it also queries host configuration, and inserts this information into the MS database. When selecting a host entry in MC, the information is read from the MS database, and not from the host directly. Therefore, MC does not detect parameters that were unavailable for MS during bootstrapping.

To correct this situation, define the missing interface(s): click New and configure them as required.

5.1.1.1. Procedure – Configuring a new interface

Purpose: 

To define a new interface, complete the following steps.

Steps: 

  1. Navigate to Networking > Interfaces tab.

  2. Click New.

  3. In the Name field, enter a name for the interface (for example, eth0).

  4. Select the Type of the interface.

    Defining a new interface

    Figure 5.3. Defining a new interface


  5. Optional step: Enter a description, if required. To enter a longer description, click .

  6. Click OK.

  7. Configure the parameters of the interface below the table: enter the IP Address, Netmask, Gateway Address, and other data as required. The list of type-specific parameters depends on the type of interface you are configuring.

    For static interfaces, that is, regular Ethernet interfaces, enter the Netmask parameter using CIDR notation.

    Warning

    As with all firewalls, you can specify only one gateway address in the network configuration and only for a single interface. The gateway box for all other interfaces must be empty.

    Configuring a new interface

    Figure 5.4. Configuring a new interface


    Note

    If the configuration information you enter in MC is not the same as the current settings on the host, the settings of the host are overwritten during the next Upload action.

  8. Check (or leave checked) the Ignore carrier loss parameter if you want the interface to retain its configuration even when there is temporarily no carrier for it.

    This option is checked by default. It is strongly recommended to keep it checked.

If you change interface settings, you have to restart the modified network interface to activate the changes. You can also temporarily stop an interface for security or maintenance reasons with the Actions button under the network interface listing.

Warning

Restarting the interface might terminate all ongoing connections of the interface.

Note

The interfaces are controlled individually.

5.1.1.2. Dynamic interfaces

Dynamic interfaces are interfaces that are either created dynamically, or obtain IP configuration information dynamically from a designated server (for example, dhcp, bootp, ppp). As their IP configuration is not known when PNS boots up (and can be different at each boot sequence), the services using these interfaces cannot include the IP address of the interface in the firewall rules related to the service. To overcome this problem, PNS can bind to interfaces instead of IP addresses. Dynamic interfaces are referenced by their name in the firewall rules. The operating system automatically notifies the running PNS instances when the IP configuration information of the interface is received from the server. IP address changes are also automatically handled within PNS. For more information on configuring firewall rules, see Section 6.5, Configuring firewall rules.

Example 5.1. Referencing static and dynamic interfaces in firewall rules

Dynamic interfaces can be used in firewall rules the same way as static interfaces. The following rule references a static interface:

Rule(proto=6,
    dst_iface='eth0',
    service='test'
    )

The following rule references a dynamic interface called dyn:

Rule(proto=6,
    dst_iface='dyn',
    service='test'
    )

5.1.2. Configuring virtual networks and alias interfaces

In some cases it can be useful to fine-tune the network for special purposes, for example, for Virtual Local Area Network (VLAN) technology: many organizations use it for security and network traffic separation purposes. VLANs are logically separated components of physical networks. Logical separation means that although they are on the same physical network (otherwise known as a broadcast domain), hosts on separate VLANs cannot communicate with each other unless a router is set up that provides the interconnection. Routing functions for VLANs, and VLAN creation in general, are typically performed by Layer 3 Ethernet switches. Provided that VLAN-capable network cards are installed in the machine, PNS fully supports VLANs, and MS provides a control for configuring them.

VLAN interfaces are named in the following manner:

ethx.n where

x

is the number of the physical interface

n

is the ID of the VLAN. The ID of the VLAN is usually a number (for example, 0 for the first VLAN of the interface, 1 for the second, and so on).

Note

If you define an interface as a VLAN interface, it cannot operate as a real, physical interface at the same time.

For example, the eth1.12 VLAN interface is the 12th VLAN interface of the eth1 physical network interface. If you define a VLAN for eth1, you cannot use eth1 as a physical interface.

5.1.2.1. Procedure – Creating a VLAN interface

Purpose: 

To create a VLAN interface, complete the following steps.

Steps: 

  1. Every VLAN interface must be connected to a physical interface. If not already configured, configure the physical interface that will be used as the VLAN interface. See Section 5.1.1, General interface configuration for details.

    Warning

    If you define an interface as a VLAN interface, it cannot operate as a real, physical interface at the same time.

  2. To create a new interface, click New.

  3. Set the Type of the interface. The type of the VLAN interface and that of the physical interface can be different.

  4. Enter a name for the VLAN interface and click OK. The name must include the name of the physical interface, the period (.) character, and a number that identifies the VLAN interface (because a physical interface can have several VLAN interfaces), for example, eth1.0.

    MC creates the new interface and automatically selects the VLAN option and sets the parent interface of the VLAN.

  5. Configure other options of the interface (for example, connected zones) as needed.

  6. To activate the changes, click Commit and Upload. Then select the physical interface of the VLAN, click Control service, and Restart the interface.

    Warning

    Restarting the interface might terminate all ongoing connections of the interface.

Configuring VLAN and alias interfaces

Figure 5.5. Configuring VLAN and alias interfaces


Using alias interfaces allows you to configure multiple IP addresses to a physical device. Alias interfaces are named in the following manner:

ethx:n where

ethx

is the name of the corresponding physical or VLAN interface.

n

is the ID of the alias interface. The ID is usually a number (for example, 0 for the first alias of the interface, 1 for the second, and so on), but it can be a more informative name as well.

An alias can be defined for existing physical and VLAN interfaces.

5.1.2.2. Procedure – Creating an alias interface

Purpose: 

To create an alias interface, complete the following steps.

Steps: 

  1. Every alias interface must be connected to a physical or a VLAN interface. If it is not already configured, configure this interface.

  2. To create a new interface, click New.

  3. Set the Type of the interface. The type of the alias interface and that of the physical interface can be different.

  4. Enter a name for the alias interface and click OK. The name must include the name of the physical or VLAN interface, the colon (:) character, and the number or the name that identifies the alias interface (because an interface can have several alias interfaces). For example, eth1:0.

    MC creates the new interface and automatically selects the alias option and sets the parent interface of the alias.

  5. Configure other options of the interface (for example, connected zones) as needed.

  6. To activate the changes, click Commit and Upload. Then select the parent interface of the alias, click Control service, and Restart the interface.

    Warning

    Restarting the interface might terminate all ongoing connections of the interface.

5.1.3. Procedure – Configuring bond interfaces

Purpose: 

To create a bond interface, complete the following steps. The interfaces used to create the bond interface must be already configured. For details on configuring network interfaces, see Procedure 5.1.1.1, Configuring a new interface.

Steps: 

  1. Navigate to the host, and select the Networking MC component.

  2. To create a new interface, select Interfaces > Network interface configuration > New. The New interface dialog is displayed.

  3. Enter a name for the interface (for example, bond0) and set its type to static.

  4. Select Type-specific options > New.

  5. Select bond_slaves and enter the names of the interfaces to bond into the Attributes field. Use a single space to separate the interface names (for example, eth0 eth1).

  6. Optional step: Select Type-specific options > New to set other bond-specific options as needed (for example, bond_mode). If an option is not listed, type its name into the Option field and its value into the Attributes field. (See the example after this procedure.)

  7. Configure the other parameters of the interface (for example, address, netmask, and so on) as needed.

  8. To activate the changes, click Commit and Upload.
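
For example, a hypothetical bond0 interface aggregating eth0 and eth1 could use the following type-specific option and attribute pairs. The option names are the ones mentioned in the steps above; the active-backup value is only an illustration based on the standard Linux bonding modes, so adjust it to your environment:

Option        Attributes
bond_slaves   eth0 eth1
bond_mode     active-backup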

5.1.4. Procedure – Configuring bridge interfaces

Purpose: 

To create a bridge interface, complete the following steps. The interfaces used to create the bridge interface must be already configured. For details on configuring network interfaces, see Procedure 5.1.1.1, Configuring a new interface.

Steps: 

  1. Navigate to the host, and select the Networking MC component.

  2. To create a new interface, select Interfaces > Network interface configuration > New. The New interface dialog is displayed.

  3. Enter a name for the interface (for example, bridge0) and set its type to static.

  4. Select Type-specific options > New.

  5. Select the bridge_ports option, and enter the names of the interfaces to bridge into the Attributes field. Use a single space to separate the interface names (for example, eth0 eth1). To bridge every available interface, enter all.

  6. Optional step: Select Type-specific options > New to set other bridge-specific options as needed (for example, bridge_stp). If an option is not listed, just type its name into the Option field and its value into the Attributes field. For a list of available options, see the bridge-utils-interfaces manual page. (See also the example after this procedure.)

  7. Configure the other parameters of the interface (for example, address, netmask, and so on) as needed.

  8. To activate the changes, click Commit and Upload.
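
For example, a hypothetical bridge0 interface bridging eth0 and eth1 could use the following type-specific option and attribute pairs. The option names are taken from the steps above; the off value for bridge_stp is only an illustration (see the bridge-utils-interfaces manual page for the valid values):

Option        Attributes
bridge_ports  eth0 eth1
bridge_stp    off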

5.1.5. Enabling spoof protection

Spoof protection means that the packet filter module of a firewall checks that packets arriving on an interface have source IP addresses that are legal in the networks reachable through that interface, and accepts only those packets that match this criterion.

For example, if eth0 connects to the Intranet (10.0.0.0/8) and it is spoof-protected, the firewall does not accept datagrams on this interface with source IP addresses outside the 10.0.0.0/8 range. It does not accept datagrams with a source IP address from the 10.0.0.0/8 range on interfaces other than eth0 either.

5.1.5.1. Procedure – Configuring spoof protection

Steps: 

  1. Select the network interface and, next to the Connects field, click .

  2. Select the zones the configured interface is connected to either directly or indirectly through routers.

    Zone selection for the Connects control

    Figure 5.6. Zone selection for the Connects control


    Note

    The Zone tree in the dialog window displays organized IP addresses, starting from the most generic (0.0.0.0/0) to the most specific. There is an implicit inheritance in the Connects specification: if the 10.0.0.0/8 address range is specified to connect to eth1, all more specific subnets of this address range (10.0.0.0/9, /10, .../32) connect to eth1 also, unless a more specific binding is explicitly specified.

  3. Select Spoof protection.

    Note

    The Management Access ruleset must be regenerated after modifying any interface settings. It is not done automatically.

For further details on zones, see Section 6.2, Zones.

5.1.6. Interface options and activation scripts

The bottom part of the Interfaces tab can be used to specify activation scripts and various other parameters of the network interfaces. The following options can be configured:

5.1.6.1. Configuring interface activation scripts

Interface activation scripts can be useful for scenarios where special procedures are required to initialize networking.

For example, changing the Media Access Control (MAC) address of a network card before bringing it up can be done with a pre-up script. Such scripts should also be used for configuring bridge interfaces.

The following types of activation scripts can be set:

  1. post-up scripts are executed after the interface is activated.

  2. post-down scripts are executed after the interface is deactivated.

5.1.6.1.1. Procedure – Creating interface activation scripts

Purpose: 

To create an interface activation script for an interface, complete the following steps.

Steps: 

  1. Navigate to Networking > Interfaces and select the interface to configure.

  2. To add a new option to the interface, click New.

    Configuring interface options

    Figure 5.7. Configuring interface options


  3. From the Option field, select the type of the script (post-up, post-down).

  4. Enter the command to be executed into the Script field under Attributes. Use the full path name, for example, /sbin/ifdown.

    Defining interface scripts

    Figure 5.8. Defining interface scripts


  5. To execute multiple commands, repeat Steps 2-4.

  6. To set the order of commands, use the arrow buttons below the list of options.

    Tip

    If you have to use complex scripts, create a script file on the PNS host using a text editor, and add an option to run it when needed.

5.1.6.2. Interface groups

To simplify the management of services that are available from multiple zones and interfaces, interfaces can be grouped. Interfaces belonging to an interface group can be controlled together: the ifup and ifdown commands support interface groups too. Firewall rules can accept connections on interface groups too. For details on zones, services, and firewall rules, see Chapter 6, Managing network traffic with PNS.

5.1.6.2.1. Procedure – Creating interface groups

Purpose: 

To assign interfaces to an interface group, complete the following steps.

Steps: 

  1. Navigate to the Networking component and select the Interfaces tab.

  2. From the list of interfaces, select the interface to be added to the group.

  3. Under the list of options, click New.

    Creating new interface groups

    Figure 5.9. Creating new interface groups


  4. From the Option field, select group.

    Creating interface groups

    Figure 5.10. Creating interface groups


  5. Groups are identified by a number between 1 and 255. To assign the interface to a group, enter a number into the Attribute field. (See the example after this procedure.)

  6. Click Ok.

  7. Repeat Steps 2–5 to add other interfaces to the group.

  8. Commit and upload your changes. To activate the changes, restart the interface.
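
For example, to place the interface into a hypothetical group 10, the option and attribute pair would be the following:

Option   Attribute
group    10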

5.1.6.3. Other interface options

The following miscellaneous options can be configured for a network interface.

Note

It is rarely required to use these options in common scenarios. Modify these settings only if you are completely aware of all the necessary details.

  • hwaddress: It sets the MAC address of the interface.

  • mtu: It sets the Maximum Transfer Unit (MTU) of the interface.

  • keep-configuration: This parameter prevents the manually configured IPs and routes, as well as the IPs and routes set by other processes (mostly keepalived), from being dropped at the restart of the Networking component. When this option is set to the value static, the static addresses and routes on the actual interface are not dropped at the restart of the Networking component. Also, if this parameter is set, then after any change on these interfaces the old values are not removed at the restart of the Networking component, but the new values are added (for example, IP, subnet). Temporarily setting the keep-configuration parameter to no and restarting the node is not advised, because the networking restart will remove all settings added by other sources too. It is recommended to reboot the node after these values have been changed, or to configure these changes manually and skip the restart.

5.1.6.3.1. Procedure – Configuring interface parameters

To configure an interface parameter, complete the following steps.

  1. Navigate to the Networking component and select the Interfaces tab. Select the interface to be configured.

  2. To add a new option to the interface, click New.

  3. From the Option field, select the parameter that you want to configure.

  4. Enter the value of the parameter into the Attributes field.

  5. Commit and upload your changes. To activate the changes, restart the interface.
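
For example, following the steps above, a hypothetical configuration that lowers the MTU of an interface and overrides its MAC address could use the following option and attribute pairs (both values are placeholders, not recommendations):

Option      Attributes
mtu         1400
hwaddress   02:00:00:00:00:01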

5.1.7. Interface status and statistics

The state of the configured interfaces is indicated by colored LEDs in front of the name of the interface:

  • Green bar: the interface is up (on the host or on all cluster nodes).

    In case of Keepalived interface: the interface is up only in one cluster node.

  • Yellow bar: the interface is up on some nodes, but not on all of them.

    In case of Keepalived interface: the interface is up on multiple nodes, but not on all of them.

  • Red bar: the interface is down on all nodes.

    In case of Keepalived interface: the interface is up on all nodes.

  • Blue bar: Unknown.

The state of the interface is automatically updated periodically. The frequency of the update can be configured in Edit > Preferences > General > Program status.

Status update can also be requested manually from the local menu of the interface:

  1. Select the interface.

  2. Right-click on the interface and select Refresh State.

    Note

    Note that interfaces with IPv6 addresses are always marked with a blue bar.

Hovering the mouse over an interface displays a tooltip with status information and detailed statistics about the interface. The following status information is displayed:

  • State: It is the status of the interface. The possible values are: Up, Down, Unknown.

  • flags: These are the flags applicable to the interface. The possible values are: UP, BROADCAST, MULTICAST, PROMISC, NOARP, ALLMULTI, LOOPBACK.

  • mtu: Maximum Transmission Unit (MTU) is the maximum size of a packet (in bytes) allowed to be sent from the interface.

  • qdisc: It is the queuing discipline. For details, see the Traffic Control manual pages (man tc).

The statistics about the traffic handled by the interface are divided into Received and Sent sections, for the received and sent packets, respectively.

  • Bytes: It is the amount of data received.

  • Packets: It is the number of packets received.

  • Errors: It is the number of errors encountered.

  • Dropped: It is the number of dropped packets.

  • Overruns: It is the number of overruns. Overruns usually occur when packets come in faster than the kernel can service the last interrupt.

  • Mcast: It is the number of multicast packets received.

  • Bytes: It is the amount of data sent.

  • Packets: It is the number of packets sent.

  • Errors: It is the number of errors encountered.

  • Dropped: It is the number of dropped packets.

  • Carrier: It is the number of carrier losses detected by the device driver. Carrier losses are usually the sign of physical errors on the network.

  • Collsns (Collisions): It is the number of collisions encountered.

5.2. Managing name resolution

The Naming tab

Figure 5.11. The Naming tab


The variables on the Naming tab correspond to the /etc/hosts and /etc/networks files. Their use is mostly optional: name resolution is faster if the most important name-IP address pairs are listed in /etc/hosts, but a correctly configured resolver can provide the same service.

Note

If you intend to use the Postfix native proxy on the PNS host, you also have to supply the Mailname parameter.

5.3. Managing client-side name resolution

The Resolver tab

Figure 5.12. The Resolver tab


The client side of DNS name resolution can be configured on the Resolver tab.

Note

The Bind native proxy of PNS cannot be configured here. For information, see Chapter 9, Native services.

In DNS terminology, the client initiating a name resolution query is called a resolver. For details about configuring a resolver correctly, refer to a DNS reference; see Appendix B, Further readings.

5.3.1. Procedure – Configure name resolution

  1. List the nameservers to be used by your host in the right pane.

  2. Set a priority order among the nameservers. The first one on the list is queried first.

  3. Set up domain search order in the left pane.

    Use the buttons with triangles on the right.

    This information is used when you issue a name query for a hostname without supplying the domain name part: for example, telnet myserver. In this case, the resolver automatically appends the domains of the search list to the hostname, in the order you specify, before sending queries to the nameservers.

    Domain search order

    Figure 5.13. Domain search order


    In the example above, this would be example.com and then, if the query is unsuccessful, myserver.example.com.

  4. Optional step: Define the preferred interface in the Sortlist.

    The sortlist directive specifies the preferred interface you wish to communicate on when, as a result of a query, you receive more than one IP address for a given host. The value of Sortlist can be a network IP address or a host IP address/subnet mask pair, where the subnet mask is given in the classic dotted decimal format and not in CIDR notation. (See the example at the end of this procedure.)

    Tip

    The optimization using the Sortlist might be useful for firewalls with many interfaces installed, or in the following special network setup.

    The firewall is connected to the Internet with two interfaces: one for a broadband, primary connection and another, lower-bandwidth backup connection through a different Internet Service Provider (ISP). If you want to reach a server on the Internet, the DNS query returns two IP addresses for the same server. From its routing table, your firewall deduces that both IP addresses are reachable, but by default it uses the IP address that was listed first in the DNS response, even if that IP address is reachable through the — slower — backup line. To avoid this situation, you can explicitly tell your resolver with the Sortlist feature that whenever possible, it must prefer the interface that connects to the higher-bandwidth primary line.

    Note that the Sortlist feature provides just a preference and not an exclusive setting: if the targeted server cannot be reached via the interface designated by the sortlist parameter, the other interface(s) and IP addresses are tried.

    Sortlist setting

    Figure 5.14. Sortlist setting
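
    For example, to prefer addresses from the hypothetical 192.168.1.0/24 network, enter 192.168.1.0/255.255.255.0 as the Sortlist value. On the host, this setting typically corresponds to a standard resolver sortlist directive along these lines:

    sortlist 192.168.1.0/255.255.255.0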


5.4. The routing editor

If a packet has to be delivered to a remote network, PNS consults the routing table to determine the path it should be sent on. If there is no information in the routing table, the packet is sent to the default gateway. For maintaining and managing the routing table, PNS offers a simple yet effective user interface on the Routing tab of the Networking MC component. It can be used to define static routes to specific hosts or networks through a selected interface.

The Routing tab

Figure 5.15. The Routing tab


The Routing tab contains the following elements:

  • a list of routing table entries in the middle of the panel

  • a filter bar on the top for searching and filtering the entries

  • a control bar on the bottom for managing the entries of the list and for activating the modifications of the routing table on the PNS firewall hosts

5.4.1. Routes

A route has the following parameters:

  • Address: It is the IP address of the network.

  • Netmask: It is the Netmask of the network.

    Note

    The IP address and netmask parameters are displayed in the Network column of the routing entries list.

  • Gateway: It is the IP address of the gateway to be used for transmitting the packets.

  • Interface: It is the Ethernet interface to be used for transmitting the packets. This parameter must be selected from the combobox listing the configured interfaces.

  • Metric: It is an optional parameter to define a metric for the route. The metric is an integer from the 0 – 32766 range.

  • Description: These notes describe what this entry is used for.

The above parameters are interpreted the following way: messages sent to the Address/Netmask network should be delivered through Gateway using Interface.

New entries into the routing table can be added by clicking the New button of the control bar; existing entries can be modified by clicking Edit. The updated routing tables take effect only after the new configuration is uploaded to the host, and the routing tables are reloaded using the Actions/Reload buttons of the control bar.

Adding new routing entries

Figure 5.16. Adding new routing entries


5.4.2. Sorting, filtering, and disabling routes

The list of routing table entries can be sorted by all of its parameters (network, gateway, and so on) by clicking on the header of the respective column. By default, the list is displayed according to the general listing policy of the routing tables, that is, the networks connecting via a gateway are listed after the directly connected networks.

The list can be filtered using the filter bar above the list.

5.4.2.1. Procedure – Filtering routes

  1. Type the search pattern into the textbox.

  2. Specify the elements to be searched using the combobox on the left.

  3. Click Filter.

    The list will only display the routes matching the search criteria.

  4. Click Clear to return to the full list.

The Filter bar

Figure 5.17. The Filter bar


Routes can be temporarily disabled by right-clicking the selected route and selecting Disable from the appearing local menu.

Note

The route becomes disabled only after the routing table is reloaded.

5.4.3. Managing the routing tables locally

When the PNS host is administered via a terminal, the routes have to be entered manually by editing the /etc/network/static-routes configuration file. Each line of this file corresponds to a route, and has the following format:

interface_name to address/netmask through gateway metric [metric]
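
For example, a hypothetical entry that routes the 192.168.10.0/24 network through the gateway 10.0.0.254 on the eth1 interface with a metric of 10 would look like the following (the interface name and all addresses are placeholders):

eth1 to 192.168.10.0/24 through 10.0.0.254 metric 10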

For the modifications to take effect, the routing tables have to be reloaded by issuing the /usr/lib/vms-transfer-agent/MS-routing params="reload" command.

Chapter 6. Managing network traffic with PNS

This chapter describes how to allow traffic to pass the PNS firewall. It gives a detailed explanation of the many features and parameters of PNS.

6.1. Understanding Application-level Gateway policies

This section provides an overview of how Application-level Gateway handles incoming connections, and the task and purpose of the different PNS components.

Application-level Gateway firewall rules permit and examine connections between the source and the destination of the connection. When a client tries to connect to a server, Application-level Gateway receives the connection request and finds a firewall rule that matches the parameters of the connection request, based on the client's address, the target port, the server's address, and other parameters of the connection. The rule selects a service to handle the connection. The service determines what happens with the connection, including the following:

  • the Transport-layer protocol permitted in the traffic, for example, TCP or UDP

  • the service started by the firewall rule.

    This also determines the application-level protocol permitted in the traffic. Application-level Gateway uses proxy classes to verify the type of traffic in the connection, and to ensure that the traffic conforms to the requirements of the protocol, for example, downloading a web page must conform to the HTTP standards.

  • the address of the destination server

    Application-level Gateway determines the IP address of the destination server using a router. Routers can also modify the target address if needed.

  • the content of the traffic

    Application-level Gateway can modify protocol elements, and perform Content Filtering. See Chapter 14, Virus and content filtering using CF for details.

  • how to connect to the server

    For non-transparent connections, Application-level Gateway can connect to a backup server if the original is unreachable, or perform load balancing between server clusters.

  • who can access the service

    Application-level Gateway can authenticate and authorize the client to verify the client's identity and privileges. See Chapter 15, Connection authentication and authorization for details.

The operations and policies configured in the service definition are performed by an Application-level Gateway instance.
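
The following minimal sketch illustrates how these elements fit together in the Python-based policy language of Application-level Gateway. It assumes zones named intranet and internet, the HttpProxy proxy class, and a transparent router; the parameter style follows Example 5.1, but treat the exact set of parameters as illustrative rather than authoritative:

# A hypothetical service definition: HTTP traffic is verified by the
# HttpProxy class and the original destination address is kept.
Service(name='intra_http',
    proxy_class=HttpProxy,
    router=TransparentRouter()
    )

# A hypothetical firewall rule: TCP (protocol 6) connections arriving
# from the intranet zone towards port 80 in the internet zone are
# handled by the service defined above.
Rule(proto=6,
    src_zone='intranet',
    dst_zone='internet',
    dst_port=80,
    service='intra_http'
    )

In MC, these definitions are typically created on the Application-level Gateway component (see Section 6.4, Application-level Gateway services and Section 6.5, Configuring firewall rules) rather than edited by hand.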

6.2. Zones

Zones describe and map the networking environment at the IP level. IP addresses are grouped into zones; the access policies of PNS operate on these zones. Zones specify segments of the network from which traffic can be received (source), or sent to (destination), through the firewall. Zones in PNS can contain:

  • IP networks,

  • subnets,

  • individual IP addresses, and

  • hostnames.

Zone management is handled by the zone-helper daemon (vela-zone-helper). vela-zone-helper is responsible for maintaining zone address information in the traffic classification subsystem, and also for updating dynamic address information in hostname-based zones.

The actual implementation of a zone hierarchy depends on the network environment, the placement and the role of PNS firewalls, the security policy, and so on.

The Internet zone, which covers all possible IP addresses, is defined on every site by default. If an IP address is not included in any user-defined zone, it belongs to the Internet zone. PNS policies can permit traffic between two or more zones; in this case another zone (for example, the intranet) should be created. Usually a special zone called the demilitarized zone (DMZ) is defined for servers available from the Internet.

Zones in PNS can have a hierarchy, where a zone may contain many subzones, and each of them can have further subzones nested within. The created zone structure can be represented as a tree hierarchy. This hierarchy is purely administrative and independent from the IP addresses defined in the zones themselves: for example, a zone that contains the 192.168.7.0/24 subnet can have a subzone with IP addresses from the 10.0.0.0/8 range.

A network can belong to a single zone only, otherwise the position of IP addresses in the affected network would be ambiguous.

The zone hierarchy is independent from the subnetting practices of the company or the physical layout of the network, and can follow arbitrary logic. The zone hierarchy applies to every host of a site.

Note

Subnets can be used directly in PNS configurations; it is not necessary to include them in a zone.

Note

It is recommended to follow the logic of the network implementation when defining zones, because this approach leads to the most flexible firewall administration. Plan and document the zone hierarchy thoroughly and keep it up-to-date. An effective and usable zone topology is essential for successful PNS administration.

6.2.1. Managing zones with MC

By default, MC defines a zone called internet on every site. The internet zone contains the 0.0.0.0 and ::0 networks with a 0 subnet mask. This zone means any network: every IP address not belonging to any other zone belongs to the internet zone.

Note

PNS uses the CIDR notation for subnetting.

Zones

Figure 6.1. Zones


The internet zone is typically used in firewall rules where one side of the connection cannot be defined more exactly.

Example 6.1. Using the Internet zone

The Internet zone identifies all external networks. To allow the internal users to visit all web pages, simply set the destination zone of the HTTP service to Internet. For details on creating services, see Section 6.4, Application-level Gateway services.

Zones are managed on the Site component in MC. The left side of the main workspace displays the zones defined on the site and their descriptions. IP networks that belong to the selected zone are displayed on the right side of the workspace.

Note

The Application-level Gateway MC component has a shortcut in its icon bar to the zone editor. The zone hierarchy applies to all firewalls of the site, therefore carefully consider every modification and its possible side-effects.

Use the control buttons to create, delete, or edit the zone definitions and the IP networks. Use the arrow icons to organize the zones into a hierarchy (see Section 6.2.3, Zone hierarchies for details).

Example 6.2. Subnetting

Suppose you have the following IP address range to put into a zone: 1.2.50.0 – 1.2.70.255. You can either define 21 IP subnets with a /24 mask, or you can define six subnets in the following manner: 1.2.50.0/23, 1.2.52.0/22, 1.2.56.0/21, 1.2.64.0/22, 1.2.68.0/23, 1.2.70.0/24. Whether you have a switched/routed network or actually use /24 subnets is irrelevant from the zone's (PNS's) point of view. As long as it encounters an IP address from the range 1.2.50.0 – 1.2.70.255, it will consider it a member of the given zone.

Furthermore, suppose you define Zone A with the IP network 10.0.0.0/8, and Zone B consisting of the network 10.0.1.0/24 and the machine Computer C with the IP address 10.0.1.100/32. From an IP addressing point of view, Computer C belongs to both subnets, but the rule PNS applies in this and similar cases is that the machine is always considered to belong to the more specific network (and thus zone), as also specified by the CIDR method. In this example, that is Zone B.

6.2.2. Procedure – Creating new zones

To create a new zone on the site, complete the following steps.

  1. Select the site from the configuration tree and click New.

    Creating a new zone

    Figure 6.2. Creating a new zone


  2. Enter a name for the zone in the displayed window.

    Tip

    Use descriptive names and a consistent naming convention. Zone names may refer to the physical location of the network or the department using the zone (for example, building_B, or marketing).

  3. Creating a new network in a zone

    Figure 6.3. Creating a new network in a zone


    To add an IP network to the zone, click New in the Networks pane.

    • To add a network or an IP address to the zone, select Subnetwork, fill the Network and Netmask fields, then click Ok.

    • To add a hostname to the zone, select Hostname, enter the hostname into the Address field, then click Ok. For details on using hostnames in zones, see Section 6.2.4, Using hostnames in zones.

    Repeat this step to add other networks to the zone.

    Note

    The new zone has effect only if used in a firewall rule definition.

    Adding networks to a zone

    Figure 6.4. Adding networks to a zone


6.2.3. Zone hierarchies

Zones can be organized into a tree, much like the directories of a file system. Define a topmost zone with many subzones, each for an administratively different part of your network. A zone and its subzone have a parent-child relationship: child zones automatically inherit all properties and settings of their parents. For example, Zone A is the parent zone of Zone B, and all clients in Zone A may browse the web through HTTP. Zone B inherits this setting, so all clients of Zone B have unrestricted HTTP access.

To stop a zone from inheriting the properties of the parent zone, use a DenyService. For details on DenyServices, see Procedure 6.4.3, Creating a new DenyService.

Zones can be reorganized as needed.

Note

Changing parent-child relations also changes the inheritance chain — which might cause unexpected results on your firewall policies. Make sure to keep up-to-date documentation of your firewall configuration.

Zones, inheritance, and DenyServices

Figure 6.5. Zones, inheritance, and DenyServices


6.2.3.1. Procedure – Organizing zones into a hierarchy

To organize zones into a hierarchy, complete the following steps.

  1. Select the site from the configuration tree in MC.

  2. Move the child zone below its parent by using the up and down arrows located next to the Find button.

    Configuring zone hierarchy

    Figure 6.6. Configuring zone hierarchy


  3. Click the right arrow to make the selected zone the child of the zone above it.

  4. Commit the changes to the site.

    Note

    Zone definitions are site-wide, so modifications are effective on every firewall of the site.

    Committing changes to the site

    Figure 6.7. Committing changes to the site


  5. Select all hosts of the site and upload the configuration.

    This step is required because changes in the zone hierarchy must be uploaded to all firewall nodes.

  6. Select all hosts of the site, click the Control icon of the icon bar and reload the configuration.

To remove a child zone from the hierarchy, select the zone and click the left arrow.

6.2.4. Using hostnames in zones

You can directly use hostnames in zones. During startup, PNS automatically resolves these hostnames to /32 IP addresses, and updates them periodically to keep track of changes. When using hostnames in zones, note the following considerations and warnings:

  • Ensure that your Domain Name Server (DNS) is reliable and continuously available. If you cannot depend on your DNS to resolve the hostnames, do not use hostnames in zones.

  • Do not use zones that include hostnames to deny access, that is, do not use such zones in DenyServices. If PNS cannot resolve a hostname, it will omit the hostname from the zone. If the zone contains a single hostname only (because you want to use it to restrict access to a specific site), and PNS cannot resolve that hostname, then the zone will be considered empty, therefore it will never match any connection. If you have a firewall rule that is more permissive than the DenyService restricting access to a zone containing an unresolvable hostname only, then this more permissive rule will take effect, permitting traffic you want to block. (For example, you create a rule that permits HTTP traffic to the Internet, and a DenyService to block HTTP traffic to example.com hostname. If PNS cannot resolve the example.com hostname, then the broader, more permissive rule will permit traffic to the example.com site.)

  • vela-zone-helper, besides maintaining zone address information, also enables the filtering and blocking of possibly illegitimate, so-called 'bogus' IP addresses.

    The filtering of the DNS-based zone IP addresses is activated by default in the configuration. The default level of filtering is set to the recommended value of 3, which indicates the following level of filtering:

    Filtering level   Filtering
    0                 No filtering takes place.
    1                 Filtering of invalid host addresses takes place: unspecified addresses (0.0.0.0/32, ::/128).
    2                 Filtering of loopback address ranges takes place (127.0.0.0/8, ::1/128).
    3                 Filtering of private address ranges (192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12, fc00::/7), link-local address ranges (169.254.0.0/16, fe80::/10), and multicast ranges (224.0.0.0/4, ff00::/8) takes place.

    Table 6.1. Filtering levels


    If the level of filtering needs to be configured differently from the recommended value, it is possible to change it with a command-line option, or in the vela-zone-helper configuration file with the help of the Text editor (see Chapter 8, The Text editor plugin).

    For details see the vela-zone-helper and the vela-zone-helper configuration file manual pages in Appendix C, PNS manual pages in Proxedo Network Security Suite 2 Reference Guide.

  • If the hostname is resolved to an IP address that is explicitly used in another zone, then PNS will use the rule with the explicit IP address. For example, you have a zone that includes the example.com hostname, another zone that includes the 192.168.100.1/32 IP address, and you have two different rules that use these zones (Rule_1 uses the hostname, Rule_2 the explicit IP address). If the example.com hostname is resolved to the 192.168.100.1 IP address, PNS will use Rule_2 instead of Rule_1.

  • If more than one hostname is resolved to the same IP address, PNS ignores that specific IP address associated with multiple hostnames. Consequently, it is not possible to use a hostname in a zone if the server uses name-based virtual hosting.

  • Zones are global in PNS, and apply to all firewalls of the site, so carefully consider every modification of a zone, and its possible side-effects.

6.2.5. Finding zones

To find a zone or a subnet, select the site in the configuration tree and click the Find button.

Finding zones and subnets

Figure 6.8. Finding zones and subnets


You can search for the name of the zone, or for the IP network it contains. When searching for IP networks, only the most specific zone containing the searched IP address is returned: if an IP address belongs to two different zones, the search returns the more specific zone.

Example 6.3. Finding IP networks

Suppose there are three zones configured: Zone_A containing the 10.0.0.0/8 network, Zone_B containing the 10.0.0.0/16 network, and Zone_C containing the 10.0.0.25 IP address. Searching for the 10.0.44.0 network returns Zone_B, because that is the most specific zone matching the searched IP address. Similarly, searching for 10.0.0.25 returns only Zone_C.

This approach is used in the service definitions as well: when a client sends a connection request, PNS looks for the most specific zone containing the IP address of the client. Suppose that the clients in Zone_A are allowed to use HTTP. A client with the IP address 10.0.0.50 (thus belonging to Zone_B) can only use HTTP if Zone_B is a child of Zone_A, or if a service definition explicitly permits Zone_B to use HTTP.

Tip

The Find tool is especially useful in large-scale deployments with complex zone and subnet structure.

6.2.6. Procedure – Exporting zones

Follow the steps to export the available zones and the zone-related pieces of information.

  1. Select the Export button to export the available zones and the zone-related pieces of information.

    There is no need to select the zone or zones for export. All information on all zones will be exported.

    Exporting zones and zone-related pieces of information

    Figure 6.9. Exporting zones and zone-related pieces of information


  2. Name the file to be exported, add the .csv extension to the file and save it.

    Note

    Make sure that the .csv extension is added to the exported file, otherwise the file will not be listed as an available file for any other activity, due to the missing file format.

    Naming the file to be exported

    Figure 6.10. Naming the file to be exported


    The following zone-related pieces of information are exported:

    • Name: It is the name of the actual zone.

    • Parent: It defines the parent for the actual zone.

    • Subnet: It identifies the subnets for the zone in a comma-separated list.

    • Host: It defines the hosts for the zone in a comma-separated list.

    • Description: It is possible to provide a description for the zone.

    The zone-related pieces of information are exported in .csv format.

6.2.7. Procedure – Importing zones

Follow the steps and pay attention to the related considerations listed below to import zones.

  1. Select the Import button to import zones.

    Note

    Only files with the .csv extension are displayed for this activity.

    Selecting the zone for import

    Figure 6.11. Selecting the zone for import


  2. Select the file to be imported and click Open.

    Consider the following when importing zones:

    • A zone is identified by two parameters, namely its name and its parent.

    • If the zone already exists, only those hosts and subnets are imported from the file that do not yet belong to any zone in MC. That is, if the file contains hosts and subnets that already exist in MC and belong to that specific zone, the import process proceeds. However, the import process is aborted if the hosts and subnets already belong to another zone in MC.

    • If the zone to be imported does not exist yet, all zone parameters are imported.

    • Invalid zones are not imported; no warning identifies the invalid zone itself. However, the import process of an invalid zone is aborted with a warning about the abortion of the process.

      • If the zone selected for import has the same name as an existing zone in MC, but the parent is different, the import process is aborted, as it is then considered to be a different zone.

        Zone with the same name but with different parent

        Figure 6.12. Zone with the same name but with different parent


      • The zone is also considered invalid if its subnet is already in use by an existing zone. The process will be aborted.

        Zone with the same subnet

        Figure 6.13. Zone with the same subnet


6.2.8. Procedure – Deleting a zone or more zones simultaneously

Follow the steps below in order to delete a zone, or even multiple zones at the same time. The deletion of multiple zones takes place one by one.

  1. Select the site from the configuration tree.

  2. Select the zone or multiple zones for deletion and click Delete.

    Deleting a zone or multiple zones

    Figure 6.14. Deleting a zone or multiple zones


    If a zone is not referenced elsewhere in the configuration, it is deleted without any notification. If a zone selected for deletion is referenced somewhere in the configuration, a warning pops up requesting confirmation of the deletion. If multiple zones are selected for deletion, the warning, if applicable, appears for each zone one by one:

    Warning on the deletion of a zone

    Figure 6.15. Warning on the deletion of a zone


    The dialogue window specifically lists where the actual zone is referenced in the configuration.

    It is possible at this stage not to delete the zone for which the warning appears. In this case, the administrator must select No in the Warning on the deletion of a zone window; consequently, the zone will not be deleted.

    The administrator can search for the zone references in the configuration, based on the list of references displayed in the dialogue window, and change their configuration if necessary. The references can, for example, be deleted, or the referenced zones can be exchanged for other zones. If these corrections are not completed in the configuration, and a zone that is referenced in the configuration is deleted, then the references to this zone are also deleted together with the zone itself. In such cases it is crucial to pay attention to these configuration details, so that the configuration is not semantically ruined. For example, if the source or destination list of a firewall rule is emptied by the deletion of its last component (a zone), the configuration might end up with a rule that matches everything, or, on the contrary, with a rule that does not let anything pass.

6.3. Application-level Gateway instances

6.3.1. Understanding Application-level Gateway instances

Instances of Application-level Gateway are groups of services that can be controlled together. A PNS firewall can run multiple instances of the Application-level Gateway process. The main benefits of multiple firewall instances are the following:

  • Administration

    A typical firewall handles many types of traffic and many different protocols. These protocols might have different administrative requirements. Inbound traffic is usually handled differently from outbound traffic. For these reasons, using multiple firewall instances can make administration more transparent.

  • Availability

    If an error (for example, misconfiguration) occurs and the firewall instance stops responding, no traffic can pass the firewall. However, an error usually affects a single instance; the other ones are still functional, so only the traffic handled by the crashed instance stops. Instances can be controlled (started, restarted, stopped) individually from each other. This is important, because stopping or restarting an instance stops all traffic handled by the instance.

    Consider the following example. A firewall uses two instances: Instance_A for all e-mail related traffic (the POP3, IMAP, and the SMTP protocols) and Instance_B for everything else (HTTP, and so on). If Instance_A stops because of an error, or is stopped by the administrator, no e-mails can be sent or received. However, all other network traffic is working.

  • Performance

    Separate firewall instances have separate processes and separate sets of threads, significantly increasing the performance on multiprocessor systems (SMP or HyperThreading).

  • Logging

    Log settings are effective on the instance level; different instances can log differently. For example, if a higher-than-average logging level is required for a type of traffic, it might be worth creating an instance for this traffic and customizing logging only for this instance.

Note

Although creating instances is beneficial, the number of instances that can run on a system is limited.

Each instance is a separate process and requires its own space in the system memory — quickly consuming the limited physical resources of the computer. More instances do not necessarily make configuration tasks easier, and complex configuration increases the chance of human errors.

Keep the number of instances relatively low unless you have a solid reason to use many instances.

Instances usually handle traffic based on the protocol used by the traffic, the direction of the traffic, or a special characteristic of the traffic (for example, traffic that requires authentication). It is common practice to define an instance for all inbound traffic that handles all services accessible from the Internet, and another one for all traffic that the clients of the intranet are allowed to use. Consider creating a separate instance for:

  • special services, for example, mission-critical traffic;

  • traffic accessing critical locations, for example, your servers;

  • traffic that requires outband authentication.

6.3.2. Managing Application-level Gateway instances

To manage Application-level Gateway instances, navigate to the Instances tab of the Application-level Gateway MC component.

Managing instances

Figure 6.16. Managing instances


The following information is displayed for each instance:

  • Name: the name and state of the instance

    The colored LED shows the state of the instance:

    • Green – Running

    • Red – Stopped

    • Blue – Unknown

      New instances that have not been started yet are in this state.

  • Number of processes: the maximum number of CPU cores that the instance can use

    Select Edit parameters > General > Number of processes to modify this setting. The default value is: 1.

  • Log verbosity: the log level set for the instance

  • Log settings: the log specifications of the instance

  • Description: a description of the instance, for example, the type of traffic it handles

Hovering the mouse over an instance displays a tooltip with detailed information, including the number of processes running in the instance, as well as the number of running threads for each process.

Use the button bar below the instances table to manage and configure the instances.

  • New: Create a new instance. On a freshly installed PNS, there are no instances — you have to create one first. See Procedure 6.3.3, Creating a new instance for details.

  • Delete: Remove an instance.

    Warning

    Deleting an instance makes the services handled by the instance inaccessible.

  • Edit parameters: Modify the name or parameters of the instance. See Section 6.3.5, Instance parameters — general for details. To modify the parameters of every instance, select > Default parameters.

  • Restart: Restart the instance. To Reload, Restart, Stop, or Start an instance, click Restart and select the desired action. These functions are needed after modifying the configuration of an instance. The Log level button sets the verbosity of logging; the Active connections button displays the connections currently handled by the instance (see Section 6.8, Monitoring active connections for details).

    Tip

    Use the Shift or the Control key to select and control multiple instances.

  • Arrow buttons: Move the instance up or down in the list. When PNS boots, the instances are started in the order they are listed.

6.3.3. Procedure – Creating a new instance

To create a new instance on a PNS firewall host, complete the following steps:

  1. Navigate to the Instances tab of the Application-level Gateway MC component on the PNS host.

    Note

    If the Application-level Gateway and the Management Access components are not available on the selected host, you have to add them first. See Procedure 3.2.1.3.1, Adding new configuration components to host for details.

  2. Click the New button located below the Instances table.

    Creating a new Application-level Gateway instance

    Figure 6.17. Creating a new Application-level Gateway instance


  3. Enter a name for the new instance.

    Note

    Use informative names containing information about the direction and type of the traffic handled by the instance, for example, intra_http or intra_pop3 referring to instances that handle HTTP and POP3 traffic coming from the intranet. Use direction names consistently, for example, include the source zone of the traffic.

  4. Describe the purpose of the instance in the Description field.

  5. To modify the parameters of the instance, uncheck the Use default parameters option and adjust the parameters as needed. For details on the available parameters, see Section 6.3.5, Instance parameters — general.

  6. Click OK.

6.3.4. Procedure – Configuring instances

To modify the parameters of an instance, complete the following steps.

  1. Navigate to the Instances tab of the Application-level Gateway MC component on the PNS host.

  2. Modify instances as described in either of the following options:

    • To modify the configuration of every instance on the site, select > Default parameters.

    • To modify the configuration of a single instance, select an instance and click Edit parameters. The Edit Instance window is displayed. Uncheck the Use default parameters option.

    Edit instance parameters

    Figure 6.18. Edit instance parameters


  3. Adjust the settings as needed, then click OK. Instance parameters are grouped into four tabs. See Section 6.3.5, Instance parameters — general for details on the available parameters.

  4. Commit and upload the changes.

  5. To activate the changes, click Restart, or select Restart > Reload.

6.3.5. Instance parameters — general

To modify instance parameters, select Application-level Gateway > Instance > Edit parameters.

The following generic settings can be configured. Instance parameters that are active (enabled) are displayed in dark colour, while inactive (disabled) instance parameters are listed in light grey:

  • Instance: It is the name of the instance.

  • Stop instance before rename: When renaming an instance and this option is enabled, Application-level Gateway stops the instance, renames it, then starts the renamed instance.

  • Description: The user can provide a description of the instance here.

The General tab has the following parameters:

General instance parameters

Figure 6.19. General instance parameters


  • Thread limit: It is the maximum number of threads the instance can start. Set Thread limit according to the anticipated number of concurrent connections. Most active client requests require their own thread. If the Thread limit is too low, the clients will experience delays and refused connection attempts.

  • Number of processes: It reflects the number of Application-level Gateway processes the instance can start. This setting determines the number of CPU cores that the instance can use. If your PNS host has many CPUs, increase this value for instances that have high traffic. Note that the Thread limit and the Thread stack limit parameters are applied separately for each process. For details on increasing the number of running processes, see Procedure 6.3.9, Increasing the number of running processes.

    For every process, Application-level Gateway uses a certain amount of memory from the stack. At most, a process uses the default value of the stack size of the host (which is currently 8 MB for Ubuntu 22.04 LTS). Application-level Gateway uses this memory only when it is actually needed by the thread; it is not allocated in advance.

  • Automatically restart abnormally terminated instances: If enabled, Application-level Gateway automatically restarts instances that crash for any reason.

  • Enable core dumps: If enabled, PNS automatically creates core dumps when an Application-level Gateway instance crashes for any reason. Core dumps are special log files and are needed for troubleshooting.

    For more details on core dumps see Section 10.11, Managing core dump files.

6.3.6. Instance parameters — logging

Instance parameters can be set on the tabs of the Edit Instance Parameters window. The Logging tab has the following parameters:

Instance parameters — logging

Figure 6.20. Instance parameters — logging


  • Verbosity level: It is the general verbosity of the instance. It ranges from 0 to 9; higher values mean more detailed logging.

    Note

    Setting a high verbosity level (above 6) can dramatically decrease the performance. On level 9 Application-level Gateway logs the entire passing traffic.

    Tip

    The default verbosity level is 3, which logs every connection, error and violation without many details.

    Levels 4 to 6 include protocol-specific information as well.

    Levels 7 to 9 are recommended only for troubleshooting and debugging purposes.

  • Message filter expression: It sets the verbosity level on a per-category basis. Each log message has an assigned multi-level category, where levels are separated by a dot. For example, HTTP requests are logged under http.request. A log specification consists of a wildcard matching log category, a colon, and a number specifying the verbosity level of that given category. Separate log specifications with a comma. Categories match from left to right. For example, http.*:5,core:3. The last matching entry will be used as the verbosity of the given category. If no match is found the default verbosity is used.

  • Include message tags: It prepends a log category and a log level to each message.

  • Escape binary characters in log files: It replaces non-printable characters (characters with codes lower than 0x20 or higher than 0x7F) with XX to avoid binary log files.

Note

Customized logging can be very useful, but should be used with caution. Too many log specifications can decrease the overall performance of Application-level Gateway.

Example 6.4. Customized logging for HTTP accounting

The HTTP proxy logs accounting information into the accounting category. Requested URLs are logged on level 4. To log the URLs, but leave the general verbosity level on 3, add a new log specification to the Message filter expression list: http.accounting:4.

6.3.7. Instance parameters — Rights

Instance parameters can be set on the tabs of the Edit Instance Parameters window. The Rights tab has the following parameters:

Instance parameters — Rights

Figure 6.21. Instance parameters — Rights


Warning

Modify these settings only if you are completely aware of the necessary details, because misconfigured rights can prevent Application-level Gateway from starting.

  • User: Run Application-level Gateway as this user. By default, Application-level Gateway runs as the normal user vela.

  • Group: Run Application-level Gateway as a member of this group.

  • Chroot directory: Change root to this directory before reading the configuration file.

  • Manage capabilities: It is a whitespace-separated list of capability names to replace the ambient capability set of firewall processes.

6.3.8. Instance parameters — miscellaneous

Instance parameters can be set on the tabs of the Edit Instance Parameters window. The Miscellaneous tab has the following parameters:

Instance parameters — miscellaneous

Figure 6.22. Instance parameters — miscellaneous


  • File descriptor limit minimum: The minimum number of open file descriptors available to each firewall process.

  • SSL Crypto engine: It defines the OpenSSL cryptographic engine to be used for hardware accelerated cryptographic support.

6.3.9. Procedure – Increasing the number of running processes

To increase the number of running processes in an Application-level Gateway instance, complete the following steps.

Steps: 

  1. Navigate to the Instances tab of the Application-level Gateway MC component on the PNS host.

  2. Select the instance you want to modify, and click Edit parameters.

  3. Uncheck the Use default parameters option.

  4. On the General tab, adjust the Number of processes option, and click OK.

  5. Commit and upload your changes.

  6. Reload the instance.

  7. Start the instance.

6.4. Application-level Gateway services

Services define the traffic that can pass through the firewall. A service is not a software component, but a group of parameters that describe what kind of traffic Application-level Gateway should accept or deny, and how to handle the accepted traffic. The service specifies how thoroughly the traffic is analyzed (packet filter or application level), the protocol of the traffic (for example, HTTP, FTP, and so on), whether the traffic is TLS-encrypted (and also related security settings like accepted certificates), the NAT policies applied to the connections, and many other parameters.

Packet-filter services forward the incoming packets using the netfilter framework provided by the Linux kernel. Application-level services create two separate connections on the two sides of Application-level Gateway (client–Application-level Gateway, Application-level Gateway–server) and analyze the traffic on the protocol level. Only application-level services can perform content filtering, authentication, and other advanced features.

The following types of services are available in Application-level Gateway:

  • Service: It inspects the traffic on the application level using proxies. For the highest available security, use application-level inspection whenever possible. For details, see Procedure 6.4.1, Creating a new service.

  • PFService: It inspects the traffic only on the packet level. Use packet-level filtering to transfer very large amounts of UDP traffic (for example, streaming audio or video). For details, see Procedure 6.4.2, Creating a new packet filtering Service (PFService).

  • DenyService: It makes a service unavailable, for example, when access is prohibited in certain zones. For details, see Procedure 6.4.3, Creating a new DenyService.

  • DetectorService: It attempts to determine the protocol used in the connection from the traffic itself, and to start a specified service. Currently it can detect HTTP, SSH, and SSL traffic. For HTTPS connections, it can also select a service based on the certificate of the server. For details, see Procedure 6.4.4, Creating a new DetectorService.

Services are managed from the Services tab of the Application-level Gateway MC component. The left side of the tab displays the configured services, while the right side shows the parameters of the selected service. Use this tab to delete unwanted services, modify existing ones, or create new ones.
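
In the background, the services configured on this tab are represented as Python definitions in the policy file (policy.py) of the instance. The following minimal sketch is for illustration only: the import lines, the overall structure, and the parameter names are assumptions based on the Zorp-style policy syntax, and the definitions actually generated by MC may differ.

    # Illustrative sketch only - MC generates the actual policy.py.
    # Imports shown as in Zorp-based configurations (assumed).
    from Zorp.Core import *
    from Zorp.Http import *

    def intra_instance():
        # Application-level inspection of HTTP traffic with a proxy
        Service(name="intra_HTTP_inter",
                proxy_class=HttpProxy,
                router=TransparentRouter())

        # Packet-level forwarding, for example for bulk UDP streams
        PFService(name="intra_streaming_inter",
                  router=TransparentRouter())

        # Make a prohibited service explicitly unavailable
        DenyService(name="deny_intra_telnet_inter")

        # Start one of the configured services based on the detected protocol
        DetectorService(name="detect_intra_inter")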

6.4.1. Procedure – Creating a new service

To create a new service that inspects traffic on the application level, complete the following steps.

  1. Navigate to the Services tab of the Application-level Gateway MC component and click New.

    Creating a new service

    Figure 6.23. Creating a new service


  2. Enter a name for the service into the opening dialog. Use clear, informative, and consistent service names. It is recommended to include the following information in the service name:

    • source zones, indicating which clients may use the service (for example, intranet)

    • the protocol permitted in the traffic (for example, HTTP)

    • destination zones, indicating which servers may be accessed using the service (for example, Internet)

    Tip

    Name the service that allows internal users to browse the Web intra_HTTP_internet. Use dots to indicate child zones, for example, intra.marketing_HTTP_inter.

  3. Click in the Class field and select Service.

  4. In the Proxy class field, select the application-level proxy that will inspect the traffic. Only traffic corresponding to the selected protocol and the settings of the proxy class can pass the firewall.

    Note

    Application-level Gateway has many proxy classes available by default. These can be used as is, or can be customized if needed.

    • For details on customizing proxy classes, see Section 6.6, Proxy classes.

    • The settings and parameters of the proxy classes are detailed in the Chapter 4, Proxies in Proxedo Network Security Suite 2 Reference Guide.

    • To permit any type of Layer 7 traffic, select PlugProxy. The PlugProxy is a protocol-independent proxy.

  5. Optional Step: If the inspected traffic will be SSL- or TLS-encrypted, select the Encryption Policy to use in the Encryption Policy field. For details, see Section 6.7.3, Encryption policies.

  6. Optional Step: In the Routing section, select the method used to determine the IP address of the destination server. For details, see Section 6.4.5, Routing — selecting routers and chainers.

  7. Optional Step: In the NAT section, select the Network Address Translation policy used to NAT the address of the client (SNAT), the server (DNAT), or both. For details, see Section 6.7.10, NAT policies.

    Note

    To remove a policy from the service, select the empty line from the combobox.

    Note

    NAT policies cannot be used in packet filtering services (PFServices) for IPv6 traffic.

  8. Optional Step: In the Chainer field, select the method used to connect to the destination server. See Section 6.4.5, Routing — selecting routers and chainers for details.

  9. Optional Step: To specify exactly which zones can be accessed using the service, click Routing > Limit > ... and select the permitted zones. If this option is set, the target server must be located in the selected zones, otherwise Application-level Gateway will reject the connection.

    Note

    The zone set in the Limit option is the actual location of the target server. This is independent from the destination address of the client-side connection.

    This option replaces the functionality of the inband_services parameter of the zone.

  10. Optional Step: In the Authentication section, select the authentication and authorization policies used to verify the identity of the client. See Chapter 15, Connection authentication and authorization for details.

  11. Optional Step: In the Advanced > Resolver policy field, select how Application-level Gateway should resolve the addresses of the client requests. See Section 6.7.11, Resolver policies for details.

  12. Optional Step: To limit how many clients can access the service at the same time, set the Advanced > Limit concurrency option. By default, Application-level Gateway does not limit the number of concurrent connections for a service (0).

  13. Optional Step: To send keep-alive messages to the server, to the client, or to both, to keep the connection open even if there is no traffic, set the Advanced > Keepalive option to V_KEEPALIVE_SERVER, V_KEEPALIVE_CLIENT, or V_KEEPALIVE_BOTH.

  14. Commit your changes.

6.4.2. Procedure – Creating a new packet filtering Service (PFService)

To create a new packet filter service that inspects traffic on the packet level, complete the following steps.

  1. Navigate to the Services tab of the PNS MC component and click New.

    Creating a new PFService

    Figure 6.24. Creating a new PFService


  2. Enter a name for the service into the opening dialog. Use clear, informative, and consistent service names. It is recommended to include the following information in the service name:

    • source zones, indicating which clients may use the service (for example, intranet)

    • the protocol permitted in the traffic (for example, HTTP)

    • destination zones, indicating which servers may be accessed using the service (for example, Internet)

    Tip

    Name the service that allows internal users to browse the Web intra_HTTP_internet. Use dots to indicate child zones, for example, intra.marketing_HTTP_inter.

  3. Click in the Class field and select PFService.

  4. To spoof the IP address of the client in the server-side connection (so that the target server sees the connection as if it originated directly from the client), select the Use client address as source option.

    Note

    For IPv6 traffic, the PFService will always spoof the client address, regardless of the setting of the Use client address as source option.

  5. To redirect the connection to a fixed address, select Routing > Directed, and enter the IP address and the port number of the target server into the respective fields. You can use links as well.

  6. Optional Step: In the NAT section, select the Network Address Translation policy used to NAT the address of the client (SNAT), the server (DNAT), or both. For details, see Section 6.7.10, NAT policies.

    Note

    To remove a policy from the service, select the empty line from the combobox.

    Note

    NAT policies cannot be used in packet filtering services (PFServices) for IPv6 traffic.

  7. Commit your changes.

6.4.3. Procedure – Creating a new DenyService

To create a new DenyService that prohibits access to certain services, complete the following steps.

  1. Navigate to the Services tab of the PNS MC component and click New.

    Creating a new DenyService

    Figure 6.25. Creating a new DenyService


  2. Enter a name for the service into the opening dialog. Use clear, informative, and consistent service names. It is recommended to include the following information in the service name:

    • source zones, indicating which clients may use the service (for example, intranet)

    • the protocol permitted in the traffic (for example, HTTP)

    • destination zones, indicating which servers may be accessed using the service (for example, Internet)

    Tip

    Name the service that allows internal users to browse the Web intra_HTTP_internet. Use dots to indicate child zones, for example, intra.marketing_HTTP_inter.

  3. Click in the Class field and select DenyService.

  4. To specify how Application-level Gateway rejects the traffic matching a DenyService, use the Deny IPv4 with and Deny IPv6 with options. By default, Application-level Gateway simply drops the traffic without notifying the client.

  5. Commit your changes.

6.4.4. Procedure – Creating a new DetectorService

To create a new DetectorService that starts a service based on the traffic in the incoming connection, complete the following steps.

  1. Navigate to the Services tab of the PNS MC component and click New.

    Creating a new DetectorService

    Figure 6.26. Creating a new DetectorService


  2. Enter a name for the service into the opening dialog. Use clear, informative, and consistent service names. It is recommended to include the following information in the service name:

    • source zones, indicating which clients may use the service (for example, intranet)

    • the protocol permitted in the traffic (for example, HTTP)

    • destination zones, indicating which servers may be accessed using the service (for example, Internet)

    Tip

    Name the service that allows internal users to browse the Web intra_HTTP_internet. Use dots to indicate child zones, for example, intra.marketing_HTTP_inter.

  3. In the Routing section, select the TransparentRouter option.

  4. Click in the Class field and select DetectorService.

  5. Commit your changes.

  6. Navigate to Application-level Gateway > Firewall Rules, and create a firewall rule that uses the DetectorService you created in the previous steps.

  7. Click New, select a DetectorPolicy, and select a service that Application-level Gateway will start if the traffic matches the DetectorPolicy. If you add more DetectorPolicy-Service pairs, Application-level Gateway will evaluate them in order, and start the service set for the first matching DetectorPolicy. If none of the DetectorPolicies match the traffic, Application-level Gateway terminates the connection.

    Note

    When using a DetectorService, establishing the connection is slower, because Application-level Gateway needs to evaluate the content of the traffic before starting the appropriate service. If the rate of incoming connection requests that use the DetectorService is high, the clients may experience performance problems during connection startup. Note that using a DetectorService has no effect on the performance after the connection has been established.

6.4.5. Routing — selecting routers and chainers

Routers define the target IP address and the port for the traffic. The default router, called TransparentRouter, uses the IP address requested by the client. The destination selected by a router may be later overridden by the proxy (if the Target address overridable by the proxy option of the router is enabled) and the chainer.

Routers suggest the destination IP address; chainers establish the connection with the selected destination. The default ConnectChainer simply connects to the destination selected by the router.
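
In the underlying policy file, the router and the chainer appear as parameters of the service definition. The following sketch is illustrative only; the SockAddrInet address helper and the exact parameter names are assumptions based on the Zorp-style policy syntax, not the configuration MC generates.

    # Illustrative only - generated configurations may differ.

    # Keep the address and port requested by the client (default behaviour)
    Service(name="intra_HTTP_inter",
            proxy_class=HttpProxy,
            router=TransparentRouter(),
            chainer=ConnectChainer())

    # Always connect to a fixed address, regardless of the client request
    Service(name="inter_HTTP_dmz",
            proxy_class=HttpProxy,
            router=DirectedRouter(dest_addr=SockAddrInet("10.10.0.1", 80)),
            chainer=ConnectChainer())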

6.4.5.1. Procedure – Setting routers and chainers for a service

To set a router or a chainer for a service, complete the following steps.

  1. Navigate to the Services tab of the Application-level Gateway MC component and select the service to modify.

  2. Click Router > ... to select and configure a router. By default, the TransparentRouter is selected. The following routers are available:

    Configuring routers and chainers

    Figure 6.27. Configuring routers and chainers


  3. Click Chainer > ... to select and configure a chainer. By default, the ConnectChainer is selected. The following chainers are available:

  4. Commit your changes.

6.4.5.2. TransparentRouter

TransparentRouter does not modify the IP address and the TCP/UDP port number requested by the client; the same parameters are used on the server side.

Using the TransparentRouter

Figure 6.28. Using the TransparentRouter


Configuring TransparentRouter

Figure 6.29. Configuring TransparentRouter


The following parameters can be configured for TransparentRouter:

Use client address as source

By default, Application-level Gateway uses its own IP address in the server-side connections: the server does not see the IP address of the original client. By selecting this option, Application-level Gateway mimics the original address of the client. Use this option if the server uses IP-based authentication, or the address of the client must appear in the server logs.

Using the client address in server-side connections

Figure 6.30. Using the client address in server-side connections


Note

The IP address of the client is related to the source NAT (SNAT) policy used for the service: using SNAT automatically enables the Use client address as source option in the router.

Target address overridable by the proxy

If this option is selected and the data stream in the connection contains routing information, then the address specified in the data stream is used as the destination address of the server-side connection.

Example 6.5. Overriding the target port with SQLNetProxy

The Oracle SQLNet protocol can request port redirection within the protocol. Configure a service using the SQLNetProxy and the Target address overridable by the proxy router option. When a client first connects to the Oracle server, the connection is established to the IP address and the port selected by the router. However, the server can send a redirect request to the client, and the router has to reconnect to the port specified in the request of the Oracle server. This procedure is performed transparently to the client.

Note

The Target address overridable by the proxy option cannot be used with InbandRouter.

Modify target port

Use the Modify target port option to connect to a different port of the server.

Using TransparentRouter with the Modify target port option

Figure 6.31. Using TransparentRouter with the Modify target port option


Modify source port

This option defines the source port that Application-level Gateway uses in the server-side connection. The following options are available:

  • Random port above 1024: Select a random port between 1024 and 65535. This is the default behavior of every router.

  • Random port in the same group: Select a random port in the same group as the port used by the client. The following groups are defined: 0-513, 514-1024, 1025–.

  • Client port: Use the same port as the client.

  • Specified port: Use the port set in the spinbutton.

6.4.5.3. DirectedRouter

DirectedRouter directs all connections to fixed addresses, regardless of the address requested by the client.

Using the DirectedRouter

Figure 6.32. Using the DirectedRouter


Configuring DirectedRouter

Figure 6.33. Configuring DirectedRouter


To add a destination address, click New, and enter the IP address and the port number of the server to connect to. If you set additional addresses, and the first address is unreachable, Application-level Gateway tries to connect to the next server. It is also possible to connect to the listed destinations in a round-robin fashion, see Section 6.4.5.7, RoundRobinChainer for details.

Tip

Use DirectedRouter for servers publicly available in the DMZ. That way outsiders do not know the real IP addresses of the servers — the servers are not even required to have public, routable IP addresses.

Note

If the server IP address of the DirectedRouter matches the GeoIpPolicy configured for a given service, the connection to the server will be established.

DirectedRouter has the following options:

Use client address as source

By default, Application-level Gateway uses its own IP address in the server-side connections: the server does not see the IP address of the original client. By selecting this option, Application-level Gateway mimics the original address of the client. Use this option if the server uses IP-based authentication, or the address of the client must appear in the server logs.

Using the client address in server-side connections

Figure 6.34. Using the client address in server-side connections


Note

The IP address of the client is related to the source NAT (SNAT) policy used for the service: using SNAT automatically enables the Use client address as source option in the router.

Target address overridable by the proxy

If this option is selected and the data stream in the connection contains routing information, then the address specified in the data stream is used as the destination address of the server-side connection.

Example 6.6. Overriding the target port with SQLNetProxy

The Oracle SQLNet protocol can request port redirection within the protocol. Configure a service using the SQLNetProxy and the Target address overridable by the proxy router option. When a client first connects to the Oracle server, the connection is established to the IP address and the port selected by the router. However, the server can send a redirect request to the client, and the router has to reconnect to the port specified in the request of the Oracle server. This procedure is performed transparently to the client.

Note

The Target address overridable by the proxy option cannot be used with InbandRouter.

Modify source port

This option defines the source port that Application-level Gateway uses in the server-side connection. The following options are available:

  • Random port above 1024: Select a random port between 1024 and 65535. This is the default behavior of every router.

  • Random port in the same group: Select a random port in the same group as the port used by the client. The following groups are defined: 0-513, 514-1024, 1025–.

  • Client port: Use the same port as the client.

  • Specified port: Use the port set in the spinbutton.

6.4.5.4. InbandRouter

The InbandRouter determines the target address from the information embedded in the transferred protocol. This is possible only for protocols that can have routing information within the data stream. Application-level Gateway can use InbandRouter with the HTTP and FTP protocols.

Configuring InbandRouter

Figure 6.35. Configuring InbandRouter


The InbandRouter has the following options:

Use client address as source

By default, Application-level Gateway uses its own IP address in the server-side connections: the server does not see the IP address of the original client. By selecting this option, Application-level Gateway mimics the original address of the client. Use this option if the server uses IP-based authentication, or the address of the client must appear in the server logs.

Using the client address in server-side connections

Figure 6.36. Using the client address in server-side connections


Note

The IP address of the client is related to the source NAT (SNAT) policy used for the service: using SNAT automatically enables the Use client address as source option in the router.

Modify source port

This option defines the source port that Application-level Gateway uses in the server-side connection. The following options are available:

  • Random port above 1024: Select a random port between 1024 and 65535. This is the default behavior of every router.

  • Random port in the same group: Select a random port in the same group as the port used by the client. The following groups are defined: 0-513, 514-1024, 1025–.

  • Client port: Use the same port as the client.

  • Specified port: Use the port set in the spinbutton.

6.4.5.5. ConnectChainer

ConnectChainer attempts to connect to the destination defined by the router. It terminates the connection if the destination server is unreachable. ConnectChainer has the following options:

Connection timeout

The server is assumed to be unreachable if the connection cannot be established within the time set in Connection timeout, specified in milliseconds.

Protocol action

It defines the type of the protocol used in the server-side connection. The default is to use the same protocol as on the client side, but PNS can enforce the use of TCP or UDP protocol.

6.4.5.6. FailoverChainer

FailoverChainer is similar to ConnectChainer, and attempts to connect to the destination defined by the router. However, if the destination is unreachable and the service uses DirectedRouter, Application-level Gateway attempts to connect to the next destination set in the router.

Tip

FailoverChainer can be used to implement a simple high availability support for the protected servers.

Configuring FailOverChainer

Figure 6.37. Configuring FailOverChainer


FailoverChainer has the following options:

Keep availability state for

Application-level Gateway does not try to connect to an unreachable server until the time set in the Keep availability state for option expires.

Connection timeout

The server is assumed to be unreachable if the connection cannot be established within the time set in Connection timeout, specified in milliseconds.

Protocol action

It defines the type of the protocol used in the server-side connection. The default is to use the same protocol as on the client side, but PNS can enforce the use of TCP or UDP protocol.
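
As an illustration, a simple failover setup of this kind might look roughly as follows in the policy file. The destination addresses, the SockAddrInet helper, and the parameter names are examples and assumptions, not the exact configuration generated by MC.

    # Illustrative only - a DirectedRouter listing two destinations combined
    # with a FailoverChainer: if the first server is unreachable, the second
    # one is tried.
    Service(name="inter_HTTP_dmz_failover",
            proxy_class=HttpProxy,
            router=DirectedRouter(dest_addr=(SockAddrInet("10.10.0.1", 80),
                                             SockAddrInet("10.10.0.2", 80))),
            chainer=FailoverChainer())
    # Using RoundRobinChainer() instead would balance the load between the
    # two servers (see Section 6.4.5.7, RoundRobinChainer).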

6.4.5.7. RoundRobinChainer

RoundRobinChainer is similar to FailoverChainer, but Application-level Gateway directs each incoming connection to the next available server. That way the load is balanced between the servers set in the destination list of the router. Use the RoundRobinChainer together with the DirectedRouter.

Tip

RoundRobinChainer can be used to implement simple load balancing for the protected servers.

Configuring RoundRobinChainer

Figure 6.38. Configuring RoundRobinChainer


RoundRobinChainer has the following options:

Keep availability state for

Application-level Gateway does not try to connect to an unreachable server until the time set in the Keep availability state for option expires.

Connection timeout

The server is assumed to be unreachable if the connection cannot be established within the time set in Connection timeout, specified in milliseconds.

Protocol action

It defines the type of the protocol used in the server-side connection. The default is to use the same protocol as on the client side, but PNS can enforce the use of TCP or UDP protocol.

6.4.5.8. SidestackChainer

SidestackChainer does not connect to the server, but passes the traffic to the specified proxy. It is possible to sidestack several proxies that way. Application-level Gateway connects to the server using a regular chainer after the proxy processes the traffic. SidestackChainer has the following options:

Configuring SidestackChainer

Figure 6.39. Configuring SidestackChainer


Side-stacked proxy

It is the proxy class that will process the traffic. To select a proxy, click New, select a proxy class, then click Select.

Final chainer

This is the chainer used when connecting to the server.

6.4.5.9. AvailabilityChainer

The AvailabilityChainer enables establishing connections with multiple target addresses, using information from the Availability Checker daemon. The Availability Checker daemon monitors whether the servers are up or down, and based on that information the AvailabilityChainer can automatically target addresses that are in up state. The AvailabilityChainer connects to the target hosts in the order they have been specified. Use the AvailabilityChainer together with the DirectedRouter.

Configuring AvailabilityChainer

Figure 6.40. Configuring AvailabilityChainer


To enable the AvailabilityChainer, the Availability Checker component has to be activated and configured. For more information on the Availability Checker component, see Section 12.6, Availability Checker.

6.4.5.10. RoundRobinAvailabilityChainer

The RoundRobinAvailabilityChainer establishes a load balancing solution, using information from the Availability Checker daemon. The Availability Checker daemon monitors whether the servers are up or down, and based on that information the RoundRobinAvailabilityChainer can automatically target the next available host in up state. The RoundRobinAvailabilityChainer connects to the hosts in an order based on the information from the Availability Checker daemon. Use the RoundRobinAvailabilityChainer together with the DirectedRouter.

Configuring RoundRobinAvailabilityChainer

Figure 6.41. Configuring RoundRobinAvailabilityChainer


To enable the RoundRobinAvailabilityChainer, the Availability Checker component has to be activated and configured. For more information on the Availability Checker component, see Section 12.6, Availability Checker.

6.5. Configuring firewall rules

6.5.1. Understanding Application-level Gateway firewall rules

Application-level Gateway firewall rules are managed on the <Host> > Application-level Gateway > Firewall Rules page. The following information is displayed for every rule:

Configuring Firewall rules

Figure 6.42. Configuring Firewall rules


Note

Not every column is displayed by default. To show or hide a particular column, right-click on the header of the table and select the column from the menu.

Whether a rule is active is indicated by its colour: the rule is dark-grey if it is active and light-grey if it is inactive.

  • ID: It is the unique ID number of the firewall rule.

  • Tags: These are the tags (labels) assigned to the firewall rule. For details on assigning tags to rules, see Procedure 6.5.5, Tagging firewall rules.

  • Protocol: It is the transport protocol used in the connection. This is the protocol used in the transport layer (Layer 4) of the OSI model. For example, TCP, UDP, ICMP, and so on.

  • VPN: The rule permits traffic only from the listed VPN connections (or IPSec connections with the specified Request ID).

  • Source Zone/Subnet: The rule permits traffic only for the clients of the listed zones and subnets.

  • Source Port: The rule permits traffic only for connections targeting the listed ports of the firewall host.

  • Destination Zone/Subnet: The rule permits traffic only for connections that target addresses of the listed zones and subnets.

  • Destination Interface/Group: The rule permits traffic only for connections that target an existing IP address of the selected interface (or interface group) of the firewall host. This parameter can be used to provide nontransparent service on an interface that received its IP address dynamically.

  • Destination Port: The rule permits traffic only for connections that target the listed ports of the destination address.

  • Service: The name of the service used to inspect the traffic.

  • Instance: The instance that the service started by the rule belongs to.

  • Description: It provides a description of the rule.

  • ICMP type and code: ICMP type determines what the ICMP packet is used for. If the type does not have any codes defined, the code field is set to zero.

6.5.1.1. Evaluating firewall rules

When Application-level Gateway receives a connection request from a client, it tries to select a rule matching the parameters of the connection. The following parameters are considered.

Name in MC                     Name in policy.py
VPN                            reqid
Source Interface               src_iface
Source Interface Group         src_ifgroup
Protocol                       proto
ICMP type                      icmp_type
ICMP code                      icmp_code
Source Port                    src_port
Destination Port               dst_port
Source Subnet                  src_subnet
Source Zone                    src_zone
Destination Subnet             dst_subnet
Destination Interface          dst_iface
Destination Interface Group    dst_ifgroup
Destination Zone               dst_zone

Table 6.2. Evaluated Rule parameters


If a connection matches multiple rules, the rule with the most specific match is selected. Selecting the most specific rule is based on the following method; a short policy.py sketch after the list illustrates it.

  • The order of the rules is not important.

  • The parameters of the connection act as filters: if you do not set any parameters, the rule will match any connection.

  • If multiple rules would match a connection, the rule with the most specific match is selected.

    For example, you have configured two rules: the first has the Source Zone parameter set to office (a zone covering all of your client IP addresses), the second has the Source Subnet parameter set to 192.168.15.15/32. The other parameters of the rules are the same. If a connection request arrives from the 192.168.15.15/32 address, Application-level Gateway will select the second rule. The first rule will match every other client request.

  • Application-level Gateway considers the parameters of a connection in groups. The first group is the least-specific, the last one is the most-specific. The parameter groups are listed below.

  • The parameter groups are linked with a logical AND operator: if parameters of multiple groups are set in a rule, the connection request must match a parameter of every group. For example, if both the Source Interface and Destination Port are set, the connection must match both parameters.

  • Parameters within the same group are linked with a logical OR operator: if multiple parameters of a group are set for a rule, the connection must match any one of the parameters. If there are multiple similar rules, the rule with the most specific parameter match for the connection will be selected.

    Note

    In general, avoid using multiple parameters of the same group in one rule, as it may lead to undesired side-effects. Use only the most specific parameter matching your requirements.

    For example, suppose that you have a rule with the Destination Zone parameter set, and you want to create a similar rule for a specific subnet of this zone. In this case, create a new rule with only the Destination Subnet parameter set; do not set the Destination Zone parameter in both rules. Setting the Destination Zone parameter in both rules and setting the Destination Subnet parameter in the second rule would work for connections targeting the specified subnet, but it would cause Application-level Gateway to reject the connections that target other subnets of the specified destination zone, because both rules would match for the connection.

  • The parameter groups are the following from the least specific to the most specific ones. Parameters within each group are listed from left to right from the least specific to the most specific ones.

    1. Destination Zone > Destination Interface Group > Destination Interface > Destination Subnet

    2. Source Zone > Source Subnet

    3. Destination Port (Note that port is more specific than port range.)

    4. Source Port (Note that port is more specific than port range.)

    5. Protocol

    6. Source Interface Group > Source Interface > VPN

  • If no matching rule is found, Application-level Gateway rejects the connection.

    Note

    It is possible to create rules that are very similar, making debugging difficult.
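
For reference, the policy.py names in Table 6.2 correspond to keyword arguments of the firewall rule definitions in the generated policy file. The following sketch only illustrates the most-specific-match behaviour described above; the service names are examples, and the exact syntax generated by MC may differ.

    # Illustrative only. Parameters that are not set match any connection.
    Rule(proto=6,                        # TCP (protocol number 6)
         src_zone=("office", ),          # clients in the 'office' zone
         dst_zone=("internet", ),
         dst_port=80,
         service="intra_HTTP_inter")

    # A more specific rule for a single client: connections coming from
    # 192.168.15.15 match this rule instead of the zone-based rule above.
    Rule(proto=6,
         src_subnet=("192.168.15.15/32", ),
         dst_zone=("internet", ),
         dst_port=80,
         service="intra_HTTP_restricted")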

6.5.2. Transparent and non-transparent traffic

Application-level Gateway can handle traffic both transparently and non-transparently. In transparent mode, the clients address the destination servers directly, without being aware that the traffic is handled by Application-level Gateway. In nontransparent mode, the clients address Application-level Gateway, that is, the destination address of the client's connection is an IP address of the firewall host, and Application-level Gateway determines the address of the destination server.

There are two methods to determine the destination address in nontransparent mode:

  • inband destination selection, where Application-level Gateway extracts the address of the destination server from the traffic itself (see Section 6.5.6, Configuring nontransparent rules with inband destination selection);

  • directing the connections to a fixed address, using a DirectedRouter in the service.

6.5.3. Procedure – Finding firewall rules

  1. Navigate to the Firewall Rules tab of the Application-level Gateway MC component.

  2. Use the filter bar above the rule list to find rules. The use of the filter bar is described in Section 3.3.10, Filtering list entries. You can search for the parameters listed in Section 6.5.1, Understanding Application-level Gateway firewall rules.

    Finding rules

    Figure 6.43. Finding rules


  3. Press Enter. The rules matching your search criteria are displayed.

    Note

    To save the list of matching rules into a file, click Export to CSV. Note that only the visible columns will be included in the file, in the order they are displayed.

6.5.4. Procedure – Creating firewall rules

Purpose: 

Firewall rules allow a specific type of traffic to pass the firewall. To create a new firewall rule, complete the following steps.

Steps: 

  1. Log in to MS and select <Host> > Application-level Gateway > Firewall Rules > New. A new window opens.

    Creating firewall rules

    Figure 6.44. Creating firewall rules


  2. Select the Conditions tab.

    Setting connection parameters

    Figure 6.45. Setting connection parameters


  3. Select the Transport protocol used in the connection. This is the protocol used in the transport layer (Layer 4) of the OSI model. The following protocols are supported: TCP, UDP, ICMP, IGMP, DCCP, GRE, ESP, AH, SCTP, and UDP-Lite.

    • To permit both TCP and UDP traffic, select TCP or UDP.

    • To permit any Layer 4 protocol, select Any.

    • For ICMP traffic, you can specify the permitted type and subtype (code) as well.

  4. Select the sources.

    Application-level Gateway can limit the traffic that can pass the firewall to only the traffic that comes from selected source networks. To permit traffic only from specific networks, select Sources > Add > <Type-of-network>. You can select zones, IPv4 or IPv6 subnets, interfaces, interface groups, and ports. Always use the most specific source suitable for your rule.

    Note

    To specify multiple ports, separate the ports with a comma, for example: 80,443

    To specify a port range, use a colon, for example: 2000:2100

    To specify multiple port ranges, separate the port ranges with commas, for example: 2000:2100,2200:2400.

  5. Select the destinations.

    Application-level Gateway can limit the traffic that can pass the firewall to only the traffic that targets selected destination addresses. To permit traffic only to specific networks, select Destinations > Add > <Type-of-network>. You can select zones, IPv4 or IPv6 subnets, interfaces, interface groups, and ports. Always use the most specific destination suitable for your rule.

    Note

    For rules that start nontransparent services, set the destination address and the port to an address of the firewall host.

    Note

    To specify multiple ports, separate the ports with a comma, for example: 80,443

    To specify a port range, use a colon, for example: 2000:2100

    To specify multiple port ranges, separate the port ranges with commas, for example: 2000:2100,2200:2400.

    Note

    It is not mandatory to set the sources and destinations. Sources and destinations act as a filter; they limit access to the clients or servers of the sources and destinations. A firewall rule without sources and destinations acts as a rule that simply forwards traffic between any client and destination.

  6. Select the service to use.

    Select Service > Service and select the service to start for connections matching the rule. The service determines the type of traffic that will be permitted by this rule (for example, HTTP, FTP, and so on) and also the level on which the traffic will be inspected (application or packet filter level).

    Selecting the service

    Figure 6.46. Selecting the service


    Note

    Proxy services can be used only if the Condition > Transport protocol option is set to TCP, UDP, or TCP or UDP.

    Warning

    The settings and parameters of the service shown on the Service tab of the rule are for reference only. Do not modify them, because it might interfere with other rules using the same service. To modify the parameters of a service, or to create a new service, use the Services tab of the Application-level Gateway MC component.

  7. Select the instance the service should run in.

  8. Optional Step: By default, new rules become active when the configuration is applied. To create a rule without activating it, deselect the Active option of the rule.

  9. Optional Step: To limit the number of connections that can be started by the rule, configure rate limits for the connections. For details, see Procedure 6.5.7, Connection rate limiting.

  10. Click OK, then commit your changes.

    Expected result: 

    A new firewall rule is created and added to the list of firewall rules. If the rule is active, the traffic specified in the rule can pass the firewall.

6.5.5. Procedure – Tagging firewall rules

Purpose: 

To add a tag to a firewall rule, complete the following steps. Tagging rules is useful, for example, to identify rules that belong to the same type of traffic.

Steps: 

  1. Navigate to <Host> > Application-level Gateway > Firewall Rules.

  2. Select the rule to tag and click Edit.

    Editing rules

    Figure 6.47. Editing rules


  3. Click Tags. The list of available tags is displayed on the left; the tags assigned to the rule are displayed on the right.

    • To create a new tag, click New, enter the name of the tag, then click OK.

    • To assign a tag to the rule, select a tag and click Assign.

    Note

    Tags that are already assigned to the rule are not shown in the Available tags list.

    Tagging rules

    Figure 6.48. Tagging rules


  4. Click OK, then commit your changes.

    Expected result: 

    The selected tags are assigned to the rule.

6.5.6. Configuring nontransparent rules with inband destination selection

When using inband destination selection, Application-level Gateway extracts the address of the destination server from the traffic. Note the following points:

  • For HTTP connections, create a firewall rule that uses a nontransparent HTTP proxy and inband destination selection. Also, set the web browsers of the clients to use Application-level Gateway as a web proxy.

  • If the clients use a caching web proxy for HTTP traffic, for example, Squid, and Application-level Gateway is located between the clients and the web proxy, then:

    • Create a firewall rule that uses a nontransparent HTTP proxy.

    • Set the parent_proxy and parent_proxy_port attributes of the proxy to the address of the caching proxy (see the sketch after this list).

    • Use a DirectedRouter in the service to redirect the connections to the caching proxy, or use inband destination selection.
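
The following sketch illustrates such a derived proxy class with the parent_proxy and parent_proxy_port attributes set. The class name, the base class, and the addresses are examples and assumptions only; derive and configure the actual class on the Proxies tab as described in Section 6.6, Proxy classes.

    # Illustrative only - a nontransparent HTTP proxy class chained to an
    # upstream caching proxy (for example, Squid).
    class CachingChainedHttpProxy(HttpProxyNonTransparent):  # base class assumed
        def config(self):
            HttpProxyNonTransparent.config(self)
            self.parent_proxy = "192.0.2.10"    # address of the caching proxy (example)
            self.parent_proxy_port = 3128       # port of the caching proxy (example)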

6.5.7. Procedure – Connection rate limiting

Purpose: 

To limit the maximum rate of new connections in order to prevent Denial of Service (DoS) attacks, configure a Limit Policy on the Policies tab of the Application-level Gateway MC component and select it from the Limit policy combobox on the Service tab of the firewall rule. You can specify the rate of connections that Application-level Gateway accepts within a given time period. Connection requests above this maximum rate are denied.

Steps: 

  1. Navigate to <Host> > Application-level Gateway > Firewall Rules.

  2. Select the rule to edit, then click Edit > Service.

  3. Select a previously created Limit Policy from the Limit policy combobox.

    Tip
    If the combobox has no element, or the existing elements do not fit your purpose, create a new one on the Policies tab of the Application-level Gateway MC component. For details, see Section 6.7.7, Limit policies.

    Connection rate limiting

    Figure 6.49. Connection rate limiting


6.6. Proxy classes

A proxy component is responsible for analyzing, filtering and possibly modifying the data that is passed through it.

Proxies work with data streams they receive as input and emit another (possibly altered) data stream as output. They never actually see “traffic” in the traditional sense; the details of connection establishment and management are handled by separate software components. Therefore, proxies can easily interoperate with each other or with other non-firewall software like virus filters, since data is passed among them as a simple data stream.

A proxy class is the low-level proxy together with its configuration settings. Proxy classes are responsible for analyzing and checking the data part of packets and passing it between the client and the server. Proxy classes can be used in service definitions. Together with the other components configured as service parameters (for example, routers) they can be used to analyze and filter communication channels among network hosts. Most proxy classes are protocol-specific, meaning that they are aware of the protocol-specific information within the data stream they are processing.

There are built-in proxy classes for the most typical network traffic types, like HTTP, FTP, POP3, and SMTP, and also for some less frequently used protocols, like TELNET and LDAP. They can be used in service definitions without modifying the default class properties and their values. See the Proxedo Network Security Suite 2 Reference Guide for details.

Note

Proxies in Application-level Gateway are fully RFC-compliant, so traffic that pretends to be of a certain type but violates the RFC specifications for that traffic type is not proxied but automatically denied instead. For example, by default the HTTP proxy enforces the RFCs for HTTP (2616 and 1945).

Example 6.7. RFC-compliant proxying in Application-level Gateway

A good example of this rule is the CODE RED worm that infected so many IIS servers around the world: the heart of this worm was a specially formatted URL request which was not RFC-compliant but was nevertheless serviced by IIS servers that had not been patched against it. Most firewall products, even application proxy firewalls, let it pass through, and only the most accurate ones, like Application-level Gateway, stopped the worm, recognizing that the URL request coming from the worm violated the RFC rules.

If using the default proxy classes and property values is not enough, it is possible to derive new classes from the original ones. Derived proxy classes inherit all the properties of the original (parent) classes, and these properties can then be altered. The number of configurable parameters varies among proxies; the proxy for HTTP traffic has the most. It is completely up to the administrator whether and to what extent they are used in the firewall's policy settings.

Note

Whenever you start customizing a proxy, you do not actually create a new proxy, but derive a proxy class from the selected built-in proxy implementation and configure different settings for it. You only modify the configuration, not the proxy module itself.

6.6.1. Customizing proxies

Default proxy classes provide an adequate level of security. Custom proxy classes are typically derived from these default proxy classes when the values of certain attributes need to be changed, for example, to manually set the content of the request headers leaving the HTTP proxy (such as browser type and operating system). Complex proxy setups, such as virus filtering of HTTP or SMTP traffic, or proxy stacking, also require derived classes.

This process is somewhat complex and involves many steps; therefore it is demonstrated using an example that changes the User-Agent HTTP request header sent by a custom HTTP proxy component.

The customized proxy class you are defining is based on an already defined proxy class. There are quite a lot of predefined proxy classes available by default. For some protocols (for example, HTTP and FTP) there is more than one to choose from, each with a specific intended purpose. FtpProxyRO, for instance, is for read-only FTP access, while FtpProxyRW is for read/write FTP access.
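
As background, the end result of the procedures below corresponds to only a few lines of Python in the underlying configuration. The following sketch shows roughly what the User-Agent customization used as an example in this section might look like as a derived class; the header action constant and the header value are assumptions for illustration, and MC generates the equivalent definition from the settings made in the GUI.

    # Illustrative only - a proxy class derived from HttpProxy that rewrites
    # the User-Agent request header sent towards the servers.
    class MyHttpProxy(HttpProxy):
        def config(self):
            HttpProxy.config(self)
            # Replace the value of the User-Agent header (the action constant
            # and the value shown here are illustrative)
            self.request_header["User-Agent"] = (HTTP_HDR_CHANGE_VALUE,
                                                 "Mozilla/5.0 (generic)")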

6.6.1.1. Procedure – Derive a new proxy class

  1. Select the Proxies tab of the Application-level Gateway component.

    The Application-level Gateway class configuration window that appears is empty by default.

    Deriving new proxy classes

    Figure 6.50. Deriving new proxy classes


  2. Click New.

  3. Select a predefined proxy class template for the customized proxy class.

    Default proxy templates

    Figure 6.51. Default proxy templates


    Proxy class names are typically descriptive, but most of them come with a detailed description as well. For more details, see the Proxedo Network Security Suite 2 Reference Guide. These descriptions either explicitly tell what the given proxy class is for, or suggest attributes of the class that can be configured to achieve a special purpose.

  4. Enter a name for the new proxy class. It is recommended to use capitalized names that indicate the functions the proxy is responsible for, for example VirusHTTP or HTTPSproxy.

  5. Add the attributes to be configured and modify attribute values for the given class. For details, see Procedure 6.6.1.2, Customizing proxy attributes.

    Note

    To have the new proxy class fully functional, you have to configure it as a service parameter of the given service.

6.6.1.2. Procedure – Customizing proxy attributes

What attribute-level configuration is needed depends on the exact requirements: if you simply need an FTP proxy that denies upload (write) requests, use the FtpProxyRO without modifications in your policy definitions – deriving a new class is unnecessary in this case.

However, if you would like to hide the browser type and operating system version information of your clients you can do it with a derived proxy class, by customizing some of its attributes. To hide browser type and operating system version information for instance, the creation of a custom User-Agent header is required. Although this may be accomplished on the client side (modifying all client web browsers), it is much easier to do with PNS.

The attributes configuration screen is divided into two main parts.

Customizing proxy attributes

Figure 6.52. Customizing proxy attributes


The upper textbox shows the list of custom, derived proxies along with the classes they were derived from (the Parent column). In the previous screenshot a simple HTTP proxy, called MyHttpProxy, was derived from the generic HttpProxy class.

  1. Navigate to Application-level Gateway > Proxies and select the proxy to customize.

  2. Click New under the lower table. The list of configurable attributes is displayed.

    Listing of proxy class attributes

    Figure 6.53. Listing of proxy class attributes


    Note

    A short description for each attribute is also displayed. For a complete description of proxy classes and attributes see the Proxedo Network Security Suite 2 Reference Guide.

    There are syntax rules for setting attributes properly. For more information on these rules, see the Proxedo Network Security Suite 2 Reference Guide or, to a limited extent, read all the available descriptions on the class selection screen.

    Tip

    AbstractProxy template descriptions are especially useful, since they contain the most information on syntax. For example, to set HTTP request headers in the traffic, see Section 4.6.2.2, Configuring policies for HTTP requests and responses in Proxedo Network Security Suite 2 Reference Guide.

  3. Select self.request_header attribute.

    The attribute appears in the Changed proxy attributes listing of the Application-level Gateway class configuration screen.

    The newly added self.request_header attribute

    Figure 6.54. The newly added self.request_header attribute


  4. Set the value of the attribute by clicking Edit. (The attribute Type is less relevant now.) A new window opens which is, by default, empty.

    Editing an attribute

    Figure 6.55. Editing an attribute


  5. Click the New button to define the name of the parameter you want to change.

    Modifying an attribute

    Figure 6.56. Modifying an attribute


    In this example HTTP request headers are configured. These are standardized in the corresponding RFC documentation or in any studies or literature on web server administration/programming.

    One of the request headers is called User-Agent which is the place to specify the browser type, version and operating system information. Popular statistics, such as the market share of web browsers, are based on this request header.

    By default, Application-level Gateway takes the original User-Agent header information it receives from clients and uses the same value in HTTP requests it generates.

  6. Enter User-Agent into the small dialog box to change the default behavior.

    You can see the name of the header changing (Key column), but the Type and Value columns still need to be changed.

  7. Left-click the Type column of the row containing the previously entered User-Agent string; a drop-down list appears. To change the value of an existing header, select type_http_hdr_change_value here, which changes the value of the given header.

    Selecting action type for the attribute

    Figure 6.57. Selecting action type for the attribute


  8. Click Edit to modify the Value column.

    Set the actual value of the User-Agent request header. The following window opens.

    Editing the value of the User-Agent header

    Figure 6.58. Editing the value of the User-Agent header


    This window presents another view of the attribute you are modifying now. The Type column of Figure Selecting action type for the attribute is now the first row in this window, while the Value column became the second row here; it is currently empty.

  9. Click Edit to set the Value column and enter a string.

    Editing the User-Agent header

    Figure 6.59. Editing the User-Agent header


    The string can be, for example, My Browser.

    Note

    The web servers you visit from now on will receive this information as the User-Agent header, and may act strangely if they, or the content they serve (Java Servlets, for instance), are not prepared to handle unexpected values in User-Agent headers.

  10. The process of changing the desired proxy class attribute is now complete; you can see the result in the Application-level Gateway class configuration window.

    The User-Agent request header attribute is changed

    Figure 6.60. The User-Agent request header attribute is changed
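
The procedure above is a GUI walkthrough; when the configuration is committed, MC generates a corresponding Python class into policy.py. The following is only a minimal sketch of what that generated definition may look like (the exact output of MC may differ; HTTP_HDR_CHANGE_VALUE is assumed here to be the Python counterpart of the type_http_hdr_change_value action selected above):

Python:
class MyHttpProxy(HttpProxy):
  def config(self):
    HttpProxy.config(self)
    # Replace the User-Agent header of every request with a fixed string
    self.request_header["User-Agent"] = (HTTP_HDR_CHANGE_VALUE, "My Browser")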


6.6.1.3. Customized proxies and the services

The proxy class parameter configured in the service definition defines what traffic passes through with the help of the given service. Different services can have different proxy classes configured, even for the same traffic type. For example, a firewall can have a number of services configured to pass HTTP traffic, each with its own, derived and customized proxy class parameter. If these proxy classes are parameterized differently, the corresponding services also behave differently for the same HTTP traffic. A single service can only have a single proxy class value configured, although that proxy class parameter can refer to a stacked proxy setup as well.
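
For illustration, a service definition in policy.py that uses the MyHttpProxy class derived in the previous example may look like the following sketch; the service name and the TransparentRouter router are placeholder assumptions, not part of the procedure above:

Python:
Service(name="intra_HTTP_inter", proxy_class=MyHttpProxy,
        router=TransparentRouter())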

6.6.2. Procedure – Renaming and editing proxy classes

To modify the basic attributes, for example, the name and the parent class of a proxy, complete the following steps:

  1. Select the class and click Edit.

  2. To rename the class, edit the name of the proxy in the Proxy name field, then click Ok.

  3. To modify the parent class of the proxy, select the new class from the Proxy template tree on the left of the dialog, then click Ok.

    Warning

    As a result of modifying the parent class, an already configured proxy (that is, one that had its default values or attributes modified) loses the attributes not present in its new parent class.

6.6.3. Analyzing embedded traffic

Most Application-level Gateway proxies can pass the information received as the payload of the incoming traffic to another proxy for further analysis. This kind of complex data analysis is made possible by placing one proxy inside another, a process called stacking. Stacking is especially useful for filtering compound traffic, that is, traffic that consists of two (or more) protocols or that needs to be analyzed in two different ways.

Note

Every proxy can decrypt SSL and TLS encryption without having to use another proxy. For details on configuring Application-level Gateway to handle encrypted connections, see How to configure TLS proxying in PNS 2.

Usually protocols consist of two parts:

  • control information, and

  • data.

Protocol proxies analyze and filter the control part and, except for some cases, are unaware of the data part. At this point, further screening of the data might be needed; therefore, proxies are able to stack in other proxies capable of filtering the data part, so the external (upper) proxy passes the data traffic to the internal (lower) proxy.

Stacking proxies

Figure 6.61. Stacking proxies


Example 6.8. Virus filtering and stacked proxies

Virus filtering is also part of the multiple analysis of traffic. It is typically performed on HTTP, POP3 and SMTP traffic, because these are the protocols viruses generally use for spreading over the Internet (using Application-level Gateway, though, it is possible to filter viruses in other protocols as well). When virus filtering is configured, a standard protocol proxy works in tandem with an antivirus engine; this way, both protocol-specific filtering and virus filtering are performed on the data if you stack the antivirus engine into the proxy.

For details on configuring virus filtering in HTTP and HTTPS traffic, see How to configure virus filtering in HTTP.

For each stacking scenario there are a number of attributes that can be configured. For more information, see the Proxedo Network Security Suite 2 Reference Guide.

6.6.3.1. Procedure – Stack proxies

Since Application-level Gateway proxies natively support TLS-encrypted connections, stacking proxies is rarely needed in Application-level Gateway. For details on configuring TLS-encrypted connections (for example, HTTPS), see How to configure TLS proxying in PNS 2. If you need to stack proxies for some reason, complete the following steps.

  1. Derive a proxy class from one of the predefined ones. For details, see Section 6.6.1, Customizing proxies.

  2. Derive a proxy class for the 'external' or parent proxy.

  3. Configure stacking on the Application-level Gateway Class configuration screen.

    Configuring stacking essentially means setting an attribute of the container proxy. The exact name of the attribute depends on the parent proxy. For details, see the Proxedo Network Security Suite 2 Reference Guide. For details on setting proxy attributes, see Procedure 6.6.1.2, Customizing proxy attributes.
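
As an illustration only, the sketch below shows an HTTP parent proxy that stacks another derived proxy into the body of server responses. The response_stack attribute and the HTTP_STK_DATA and Z_STACK_PROXY names are specific to the HTTP proxy and are shown here as assumptions; MyStackedProxy stands for a previously derived proxy class. Always check the Reference Guide for the attribute valid for your parent proxy.

Python:
class FilteringHttpProxy(HttpProxy):
  def config(self):
    HttpProxy.config(self)
    # Pass the data part of responses to GET requests to the stacked proxy
    self.response_stack["GET"] = (HTTP_STK_DATA, (Z_STACK_PROXY, MyStackedProxy))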

6.7. Policies

The Policies tab provides a single interface for managing all the different policies used in Application-level Gateway service definitions.

Policies are independent from service definitions. A single policy can be used in several services or proxy classes. Policies must be created and properly configured before they are actually used in a service. When configuring a service, only the existing policies, that is, the previously defined ones, can be selected.

The Policies tab

Figure 6.62. The Policies tab


On the left side of the Policies tab the existing policies are displayed in a tree, sorted by policy type. If a policy is selected, its parameters are displayed on the right side of the panel.

The policies available from the Policies tab of the MC Application-level Gateway component are listed below. The subsequent sections describe the different policy types and their uses. The Authentication, Authentication Provider, and Authorization policies are discussed in Chapter 15, Connection authentication and authorization.

6.7.1. Procedure – Creating and managing policies

To create a new policy, complete the following steps.

  1. Navigate to Application-level Gateway > Policies and click the New button under the policies list.

  2. Enter a name for the policy and select the type of the new policy from Policy type combobox. If a policy or a policy group (for example, NAT policy) is selected, the combobox is automatically set to the same type.

    Defining new policies

    Figure 6.63. Defining new policies


    Note

    Custom policy classes can be created using the Class editor. However, this is only recommended for advanced users.

  3. Configure the parameters of the new class on the right side of the panel. Existing policy classes can be modified here as well.

    Tip
    • To disable a policy or a matcher, use the local menu (right-click the selected policy). Disabled policies will be generated into the configuration files as comments.

    • To duplicate a policy, use the Copy and Paste options of the local or the Edit menu.

6.7.2. Detector policies

Detector policies can be used in firewall rules with DetectorServices: they specify which service Application-level Gateway should start for a specific type of protocol or certificate found in the traffic of a connection. Currently, Detector policies can detect the HTTP, SSH, and SSL protocols. In SSL/TLS-encrypted connections, Application-level Gateway can select which service to start based on the certificate of the server.

For example, you can use this feature for the following purposes:

  • to enable HTTP traffic on non-standard ports

  • to enable SSL-encrypted traffic only to specific servers or server farms that do not have public IP addresses (for example, Microsoft Windows Update servers)

  • to enable access only to specific HTTPS services (for example, enable access to Google Search (which has a certificate issued to www.google.com), but deny access to GMail (which uses a certificate issued to accounts.google.com))

Detector policies

Figure 6.64. Detector policies


Detector policies contain a detector that determines if the traffic in a connection belongs to a particular protocol or not. Firewall rules can include a list of detector-service pairs. When a client opens a connection that is handled by a DetectorService, Application-level Gateway evaluates the detectors in the Detector policy of the DetectorService, starting with the first detector. When a detector matches the traffic in the connection, Application-level Gateway starts the service set for the detector to handle the connection. If none of the detectors match the traffic, then Application-level Gateway terminates the connection.

Example 6.9. Defining a Detector policy

Python: Below is a Detector policy that detects SSH traffic in a connection.

DetectorPolicy(name="DemoSSHDetector", detector=SshDetector())
DetectorPolicy(name="DemoHTTPDetector", detector=HttpDetector()

The following firewall rule uses the DemoSSHDetector Detector policy to start the DemoSSHService service if SSH traffic is found in a connection, and drops other connections.

Rule(rule_id=1,
    proto=6,
    detect={'DemoSSHDetector' : 'demo_instance/DemoSSHService',}
    )

The following firewall rule uses the DemoSSHDetector and the DemoHTTPDetector Detector policies to start the DemoSSHService or the DemoHTTPService services if SSH or HTTP traffic is found in a connection, and drops other connections.

Rule(rule_id=1,
    proto=6,
    detect={'DemoSSHDetector' : 'demo_instance/DemoSSHService',
            'DemoHTTPDetector' : 'demo_instance/DemoHTTPService',}
    )

6.7.3. Encryption policies

6.7.3.1. Understanding Encryption policies

This section describes the configuration blocks of Encryption policies and objects used in Encryption policies. Encryption policies were designed to be flexible, and make encryption settings easy to reuse in different services.

An Encryption policy is an object that has a unique name, and references a fully-configured encryption scenario.

Encryption scenarios are actually Python classes that describe how encryption is used in a particular connection, for example, both the server-side and the client-side connection is encrypted, or the connection uses a one-sided SSL connection, and so on. Encryption scenarios also reference other classes that contain the actual settings for the scenario. Depending on the scenario, the following classes can be set for the client-side, the server-side, or both.

  • Certificate generator: It creates or loads an X.509 certificate that Application-level Gateway shows to the peer. The certificate can be a simple certificate (Section 5.5.23, Class StaticCertificate in Proxedo Network Security Suite 2 Reference Guide), a dynamically generated certificate (for example, used in a keybridging scenario, Section 5.5.12, Class DynamicCertificate in Proxedo Network Security Suite 2 Reference Guide), or a list of certificates to support Server Name Indication (SNI, Section 5.5.18, Class SNIBasedCertificate in Proxedo Network Security Suite 2 Reference Guide).

    The related parameters are: client_certificate_generator, server_certificate_generator

  • Certificate verifier: The settings in this class determine if Application-level Gateway requests a certificate of the peer and the way to verify it. Application-level Gateway has separate built-in classes for the client-side and the server-side verification settings: Section 5.5.6, Class ClientCertificateVerifier in Proxedo Network Security Suite 2 Reference Guide and Section 5.5.19, Class ServerCertificateVerifier in Proxedo Network Security Suite 2 Reference Guide. For details and examples, see Section 3.2.5, Certificate verification options in Proxedo Network Security Suite 2 Reference Guide.

    The related parameters are: client_verify, server_verify

  • Protocol settings: The settings in this class determine the protocol-level settings of the SSL/TLS connection, for example, the permitted ciphers and protocol versions, session-reuse settings, and so on. Application-level Gateway has separate built-in classes for the client-side and the server-side SSL/TLS settings: Section 5.5.10, Class ClientTLSOptions in Proxedo Network Security Suite 2 Reference Guide and Section 5.5.22, Class ServerTLSOptions in Proxedo Network Security Suite 2 Reference Guide. For details and examples, see Section 3.2.6, Protocol-level TLS settings in Proxedo Network Security Suite 2 Reference Guide.

    The related parameters are: client_tls_options, server_tls_options

Application-level Gateway provides the following built-in encryption scenarios:

  • TwoSidedEncryption: Both the client-Application-level Gateway and the Application-level Gateway-server connections are encrypted. For details, see Section 5.5.25, Class TwoSidedEncryption in Proxedo Network Security Suite 2 Reference Guide.

  • ClientOnlyEncryption: Only the client-Application-level Gateway connection is encrypted, the Application-level Gateway-server connection is not. For details, see Section 5.5.8, Class ClientOnlyEncryption in Proxedo Network Security Suite 2 Reference Guide.

  • ServerOnlyEncryption: Only the Application-level Gateway-server connection is encrypted, the client-Application-level Gateway connection is not. For details, see Section 5.5.21, Class ServerOnlyEncryption in Proxedo Network Security Suite 2 Reference Guide.

  • ForwardStartTLSEncryption: The client can optionally request STARTTLS encryption. For details, see Section 5.5.16, Class ForwardStartTLSEncryption in Proxedo Network Security Suite 2 Reference Guide.

  • ClientOnlyStartTLSEncryption: The client can optionally request STARTTLS encryption, but the server-side connection is always unencrypted. For details, see Section 5.5.9, Class ClientOnlyStartTLSEncryption in Proxedo Network Security Suite 2 Reference Guide.

  • FakeStartTLSEncryption: The client can optionally request STARTTLS encryption, but the server-side connection is always encrypted. For details, see Section 5.5.15, Class FakeStartTLSEncryption in Proxedo Network Security Suite 2 Reference Guide.
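
As a minimal sketch, an Encryption policy built from these classes may look like the following; the EncryptionPolicy wrapper and the default-constructed verifier and TLS options shown here are assumptions for illustration, and real configurations typically also set the certificate generator and further parameters:

Python:
EncryptionPolicy(name="DemoServerOnlyTLS",
    encryption=ServerOnlyEncryption(
        server_verify=ServerCertificateVerifier(),
        server_tls_options=ServerTLSOptions()))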

For examples on configuring Encryption policies, see How to configure TLS proxying in PNS 2. For details on HTTPS-specific problems and the related solutions, see How to configure HTTPS proxying in PNS 2.

6.7.4. GeoIP policies

In general, geoip policies encapsulate a geographical location based filtering solution for service sessions, which can be referenced using its identifier.

Note

If GeoIpPolicy is configured for a given service and the server IP address of a DirectedRouter is matched, connection to the server will be established.

The GeoIP classes predefined in Application-level Gateway are described in the subsequent sections.

Apart from the predefined ones, it is also possible to create custom GeoIP classes.

6.7.5. GeoLocationLimit

A geographical location based filtering solution. The following parameters have to be specified:

GeoLocationLimit settings

Figure 6.65. GeoLocationLimit settings


  • country_list: List of countries to apply the selected action to.

  • action: Action that is applied to packets that originate from or are destined to any country defined in country_list.

  • exceptions: List of IP addresses or subnets that should not be considered when checking the geographical location.

  • limit: GeoPacketLimit instance for packet number based rate limiting for GeoIP.

6.7.6. GeoPacketLimit

A rate limit solution based on packet numbers. The following parameters have to be specified:

GeoPacketLimit settings

Figure 6.66. GeoPacketLimit settings


  • time_unit: Time quantum for rate limitation time dimension.

  • packet_number: Maximum allowed packet rate in the given time quantum. It determines the speed or frequency at which packets can arrive.

    Example 6.10. GeoPacketLimit example settings
    • If the limit is 10/minute, and 5 packets arrive in one second, then only the first one will be accepted.

    • If the limit is 10/minute and 100 packets arrive in one minute, then approximately every tenth will be accepted evenly.

  • burst_number: Maximum initial number of packets to be treated together when calculating the rate limit.
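
As an illustration, a GeoIP policy combining these classes may look like the sketch below. The GeoIpPolicy wrapper is the one referenced in the note at the beginning of this section, but the geoip parameter name, the action constant and the example values are assumptions made for illustration only; check the Reference Guide for the exact names.

Python:
GeoIpPolicy(name="DemoGeoLimit",
    geoip=GeoLocationLimit(
        country_list=("US", "DE"),
        action=GEOIP_ACT_DROP,  # hypothetical action constant
        exceptions=("192.168.0.0/16",),
        limit=GeoPacketLimit(time_unit=60, packet_number=600, burst_number=20)))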

6.7.7. Limit policies

In general, limit policies encapsulate a rate limit solution for service sessions, which can be referenced using its identifier.

The available limit classes which are predefined in Application-level Gateway are listed below.

  • PacketLimit: A rate limit solution based on packet numbers.

Apart from the predefined ones, it is also possible to create custom limit classes. The various limit classes and their uses are described in the subsequent sections.

6.7.8. PacketLimit

A rate limit solution based on packet numbers. The following parameters have to be specified:

PacketLimit settings

Figure 6.67. PacketLimit settings


  • time_unit: Time quantum for rate limitation time dimension.

  • packet_number: Maximum allowed packet rate in the given time quantum. It determines the speed or frequency at which packets can arrive.

    Example 6.11. PacketLimit example settings
    • If the limit is 10/minute, and 5 packets arrive in one second, then only the first one will be accepted.

    • If the limit is 10/minute and 100 packets arrive in one minute, then approximately every tenth will be accepted evenly.

  • burst_number: Maximum initial number of packets to be treated together when calculating the rate limit.

  • loglimit_time_unit: Time quantum for logging rate limitation.

  • logging_limit: The rate of dropped packets before logging is triggered.
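
A sketch of a complete Limit policy using PacketLimit follows; the LimitPolicy wrapper name and the example values (time_unit taken as seconds) are assumptions made for illustration:

Python:
LimitPolicy(name="DemoPacketLimit",
    limit=PacketLimit(time_unit=60, packet_number=600, burst_number=20,
        loglimit_time_unit=60, logging_limit=100))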

6.7.9. Matcher policies

In general, matcher policies can be used to find out if a parameter is included in a list (or which elements of a list correspond to a certain parameter), and influence the behavior of the proxy class based on the results. Matchers can be used for a wide range of tasks, for example, to determine if the particular IP address or URL that a client is trying to access is on a black or whitelist, or to verify that a particular e-mail address is valid. The matchers usable in a proxy class are described in the Proxedo Network Security Suite 2 Reference Guide.

Note

Matchers can also be used in custom proxy classes created with the Class editor.

Matcher policies

Figure 6.68. Matcher policies


Application-level Gateway has a number of predefined matcher classes; and it is also possible to make complex decisions from the results of individual matchers using the CombineMatcher class. The available predefined classes are listed below.

  • DNSMatcher: It retrieves the IP address(es) of a domain from the name server specified.

  • WindowsUpdateMatcher: It retrieves the IP addresses required for updating computers running Microsoft Windows from the name server specified.

  • RegexpMatcher: It is a general regular expression matcher.

  • RegexpFileMatcher: It performs regular expression based matching on the contents of files.

  • SmtpInvalidRecipientMatcher: It consults a mail server to verify that an e-mail address is valid.

  • CombineMatcher: It makes complex decisions by combining the results of multiple simple matchers using logical operations.

Apart from the predefined ones, it is also possible to create custom matcher classes. The various matcher classes and their uses are described in the subsequent sections. The use of matchers in proxy classes is discussed in Section 6.7.9.7, Using matcher classes in proxy classes.

6.7.9.1. Matching domain names with DNSMatcher

DNSMatcher retrieves the IP addresses of domain names. This can be used in domain name based policy decisions, for example to allow encrypted connections only to trusted e-banking sites. If the IP address of the name server is not specified in the DNS Server field, the name server set in the Networking component is used (see Section 5.3, Managing client-side name resolution for details).

By default, domain name resolution is performed on demand rather than at PNS startup, so that unnecessary startup slowdown can be avoided. To have domain names resolved at each startup, enable the resolve_on_init parameter.

Note

Note that if the zones or the matchers contain unresolvable elements, waiting for the resolution to time out may increase startup time.

It is recommended to have a locally installed caching DNS service that can provide fast responses for the domains used, and to monitor it.

Example 6.12. DNSMatcher for two domain names
Sample DNSMatcher policy

Figure 6.69. Sample DNSMatcher policy


Python:
MatcherPolicy(name="ExampleDomainMatcher", matcher=DNSMatcher(server="dns.example.com",\
hosts=("example2.com", "example3.com")))

6.7.9.2. WindowsUpdateMatcher

WindowsUpdateMatcher is actually a DNSMatcher used to retrieve the IP addresses currently associated with the v5.windowsupdate.microsoft.nsatc.net, v4.windowsupdate.microsoft.nsatc.net, and update.microsoft.nsatc.net domain names; only the IP address of the name server has to be specified. Windows Update runs on a distributed server farm, using the DNS round-robin method and a short TTL to constantly change the set of servers currently visible; consequently, the IP addresses of the servers are constantly changing.

Tip

This matcher class is useful for creating firewall policies related to updating Windows-based machines. Windows Update runs over HTTPS: there is no real use in inspecting the HTTP traffic embedded in the SSL tunnel (since it is mostly file download), but it is important to verify the identity of the servers.

6.7.9.3. RegexpMatcher

A RegexpMatcher consists of two string lists, one describing the regular expressions to be found (Match list) and, optionally, another list of expressions that should be ignored (Ignore list) when processing the input. By default, matches are case insensitive. For case sensitive matches, uncheck the Ignore case option.

Note

The string lists are stored in the policy.py configuration file.

Example 6.13. Defining a RegexpMatcher
Sample RegexpMatcher

Figure 6.70. Sample RegexpMatcher


The matcher below defines a RegexpMatcher called Smtpdomains, with only the smtp.example.com domain in its match list.

Python:
MatcherPolicy(name="Smtpdomains", matcher=RegexpMatcher\
(match_list=("smtp.example.com",), ignore_list=None))

6.7.9.4. RegexpFileMatcher

RegexpFileMatcher is similar to RegexpMatcher, but the two lists are stored in separate files. The matcher itself stores only the paths and filenames of the lists. The files themselves can be managed either manually or by using the FreeText plugin.
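
As a sketch, a RegexpFileMatcher policy may look like the following; the match_fname and ignore_fname parameter names and the file path are assumptions for illustration, and the matcher simply stores the locations of the two pattern files described above:

Python:
MatcherPolicy(name="UrlBlacklist", matcher=RegexpFileMatcher(\
match_fname="/etc/PNS/blacklist.txt", ignore_fname=None))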

6.7.9.5. Verifying e-mail addresses with the SmtpInvalidMatcher

This matcher class uses an external SMTP server to verify that a given address (that is, the recipient of an e-mail) exists. The following parameters have to be specified:

Configuring SmtpInvalidMatcher

Figure 6.71. Configuring SmtpInvalidMatcher


  • Server name and Server port: These fields identify the domain name and the port used by the SMTP server to be queried.

  • Bind address: Application-level Gateway uses this IP address as the source address of the connection when connecting to the SMTP server to be queried.

  • Cache timeout: The result of a query is stored for the period set as Cache timeout in seconds. If a new e-mail is sent to the same recipient within this interval, the stored verification result is used.

  • Force delivery attempt and Sender address: Send an e-mail to the recipient as if it were sent by Sender address. If the destination mail server accepts this mail, the original e-mail is sent. This option is required because many mail server implementations do not support the VRFY SMTP command properly, or it is not enabled. (The default value for Sender address is <>, which refers to the mailer daemon and is recognized by virtually all servers.)

6.7.9.6. Making complex decisions with the CombineMatcher

CombineMatcher uses the results of multiple matchers to make a decision. The results of the individual matchers can be combined using the logic operations AND, OR and NOT. Both existing matcher policies (Matcher policy) and policies defined only within the particular CombineMatcher (Matcher instance) can be used.

CombineMatcher

Figure 6.72. CombineMatcher


New matchers can be added with the Add button. Each line in the main window of the CombineMatcher configuration panel corresponds to a matcher. To use an existing matcher policy, set the combobox on the left to Matcher policy. After that, clicking the ... icon opens a dialog window where the matcher to be used can be selected.

Tip

Matchers can also be configured whenever needed: set the combobox to Matcher instance, click on the ... icon, and select and configure the matcher as required.

Each matcher can be taken into consideration either with its regular, or with its inverted result using the Not checkbox. The individual matchers (or their inverses) can be combined with the logical AND, and OR operations. This can be set in the Logic combobox:

  • If all criteria are met: It corresponds to the logical AND; the CombineMatcher will return the TRUE value only if all criteria are TRUE.

  • If any criteria are met: It corresponds to the logical OR; the CombineMatcher will return the TRUE value if at least one criterion is TRUE.

  • If all criteria are the same: CombineMatcher will return the TRUE value if all criteria have the same value (either TRUE or FALSE).

Tip

A CombineMatcher can also be used to combine the results of other CombineMatchers, thus very complex decisions can also be made.

Example 6.14. Blacklisting e-mail recipients

A simple use for CombineMatcher is to filter the recipient addresses of e-mails using the following process:

  1. An SmtpInvalidMatcher (called SmtpCheckrecipient) verifies that the recipient exists.

  2. A RegexpMatcher (called SmtpWhitelist) or RegexpFileMatcher is used to check if the address is on a predefined list. This list is either a whitelist (permitted addresses) or a blacklist (addresses to be rejected).

  3. A CombineMatcher (called SmtpCombineMatcher) sums up the results of the matchers with a logical AND operation (If all criteria are met). If the list of the RegexpMatcher is a blacklist, the result of the RegexpMatcher should be inverted by checking the Not checkbox.

  4. An SmtpProxy (called SmtpRecipientMatcherProxy) references SmtpCombineMatcher in its recipient_matcher attribute.

Python:
class SmtpRecipientMatcherProxy(SmtpProxy):
  recipient_matcher="SmtpCombineMatcher"
  def config(self):
    SmtpProxy.config(self)

MatcherPolicy(name="SmtpCombineMatcher", matcher=CombineMatcher\
(expr=(V_AND, "SmtpCheckrecipient", "SmtpWhitelist")))
MatcherPolicy(name="SmtpWhitelist", matcher=RegexpMatcher\
(match_list=("info@example.com",), ignore_list=None))
MatcherPolicy(name="SmtpCheckrecipient", matcher=SmtpInvalidRecipientMatcher\
(server_port=25, cache_timeout=60, attempt_delivery=FALSE, \
force_delivery_attempt=FALSE, server_name="recipientcheck.example.com"))

6.7.9.7. Using matcher classes in proxy classes

To actually use the defined matchers, they have to be specified in the proper attributes of the particular proxy class. The proxy classes can use matchers for a variety of tasks. The matchers to be used by the particular proxy class can be added as Changed class attributes.

Using matchers in proxy classes

Figure 6.73. Using matchers in proxy classes


For example, SmtpProxy can use three different matchers: one to check the relayed domains (for example, using a RegexpMatcher), and two SmtpInvalidMatchers (one for sender and one for recipient address verification). For the details of the matchers usable by a given proxy class, see the attributes of the class in Chapter 4, Proxies in Proxedo Network Security Suite 2 Reference Guide.

Example 6.15. SmtpProxy class using a matcher for controlling relayed zones
Python:
class SmtpMatcherProxy(SmtpProxy):
  relay_domains_matcher="Smtpdomains"
  def config(self):
    SmtpProxy.config(self)

MatcherPolicy(name="Smtpdomains", matcher=RegexpMatcher(match_list=("example.com",),\
 ignore_list=None))

6.7.10. NAT policies

Network Address Translation (NAT) is a technology that can be used to change source or destination addresses in a connection from one IP address to another one.

Today, most corporate networks work with private, non-Internet-routable IP addresses from the 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16 ranges. These IP addresses are suitable for intranet communication (they can even be routed internally) but cannot be used on the Internet. Therefore a mechanism, NAT, is in place that substitutes a public IP address whenever an internal client wants to access Internet resources.

Another use for NAT is address hiding. Towards the Internet only a limited number of public IP addresses are visible, while you may have thousands of internal clients and servers all “translated” to those few addresses.

Tip

Address hiding with the help of NAT is especially useful for published servers (usually located in a DMZ). The potential attackers can see only a few public IP addresses (or even a single one) but from this information the actual location and the number of the servers, or even their configuration details remain hidden.

Note

Although address hiding can be considered as a security feature, NAT alone is not enough to protect a network from malicious intruders. Never use NAT in itself as a security solution.

Based on whether source or destination IP addresses are transformed, two kinds of NAT can be differentiated.

  • Source NAT (SNAT) - if source IP addresses are transformed

  • Destination NAT (DNAT) - if address transformation is performed on destination IP addresses

Basically, NAT in Application-level Gateway provides a possibility to modify the source and destination IP address at the server side before the connection is established. The source and destination IP address values are defined previously by the router on the basis of the router type, its settings and the client request. In some cases, Proxy settings can also be involved, for example when the Target address overrideable by the proxy router option is enabled. NAT is responsible for changing IP addresses to some other addresses depending on the configuration of the NAT policy.

Note

This NAT is performed by the Application-level Gateway component and is totally different from the NAT offered by nftables in the Management Access component.

By default, when no NAT policy is set up, Application-level Gateway uses its own IP address configured for the corresponding network interface when sending out traffic in any direction. Although this results in many clients “using” the same (single) IP address, it is not considered as SNAT technically.

In Application-level Gateway, NAT is performed on the application proxy level. In this case it is not strictly IP address replacement, since the original packets do not appear in the traffic leaving the firewall; rather, it is an IP address modification rule. If application proxying is used — meaning there is no traffic forwarded on the packet filter level — packet filter level NAT is neither required nor recommended.

Service definitions contain NAT policy settings to configure application proxy-level NAT.

Note

If some traffic is managed at the packet filter level, NAT must be performed at the packet filter level, too. In this case, NAT is actually an IP address replacement, since the packet filter – rules permitting – forwards the original packets it receives. You are not recommended to use NAT at the packet filter level.

Originally, NAT meant only address translation, leaving port information travelling in TCP or UDP packets unmodified. This can lead to errors especially in large networks where thousands of clients are NATed by a single machine, therefore NAT technologies were modified to do port translation as well, guaranteeing that even if two internal clients try to use the same source port number in session establishments, the packets leaving the NAT server always have unique source IP address/port pairs.

Note

Port translating NAT devices are incompatible with IPSec encryption (which is a major drawback as IPSec is becoming increasingly popular) and may be incompatible with proprietary, two-channel protocols as well.

6.7.10.1. Configuring NAT in Application-level Gateway

Before you start NAT configuration, you must decide whether you need it at all. If you need traffic redirection, for example to a web server in your DMZ, routers may serve your needs. By default, Application-level Gateway uses its own IP address (bound to the corresponding adapter) for all connections leaving it in any direction, unless the Use client address as source router option is set, in which case the original client IP address is used. Consequently, NAT may not be absolutely necessary.

Note

Configuring SNAT Policy for a Service automatically enables the Use client address as source router function, so during SNAT the client's address is used, not the firewall's.

As opposed to network configurations without firewalls, where NAT is a universal setting for all clients communicating with any protocol, in Application-level Gateway, different traffic can be NATed differently because NAT configurations are linked to services. It can happen that while outgoing HTTP traffic is SNATed to a single public IP address, SQL traffic from the same network is not SNATed at all, and finally FTP download traffic is SNATed to a separate NAT pool.

6.7.10.1.1. Procedure – Configuring NAT

  1. Create the required NAT policies on the Policies tab of the Application-level Gateway MC component.

    NAT policy configuration window

    Figure 6.74. NAT policy configuration window


    Click the New button, select NAT Policy from the Policy type combobox and supply a name for the new policy.

    Names should be descriptive and ideally contain information about the direction of NATing and/or about the type of traffic NATed.

    In most network configurations NAT is typically not service-specific; a generic NAT policy may be adequate for most outgoing or incoming traffic. There are no compulsory rules for naming.

    NAT policies can be renamed any time.

    Tip

    Remove NAT policies from the configuration set if they are no longer needed.

    NAT policies can be removed only if they are not used in any service definition.

  2. Configure a NAT solution.

    Application-level Gateway supports several different NAT solutions.

    Editing a GeneralNAT rule

    Figure 6.75. Editing a GeneralNAT rule


    GeneralNAT has three parameters: source subnet, destination subnet, and the translated subnet. Connections arriving from the source subnet, that target the destination subnet, are modified to use the translated subnet. If the NAT policy is used as Source-NAT (SNAT), the source subnet is translated to translated subnet, if the policy is used as Destination NAT (DNAT), the target subnet is translated.

    Sample GeneralNAT rule

    Figure 6.76. Sample GeneralNAT rule


    Note

    The original and translated netmasks do not need to be the same: it is possible to map an entire /24 network onto a single IP address (/32 mask). However, the order of the pairs is important because Application-level Gateway processes the list from top to bottom.

    Depending on the NAT type (SNAT, DNAT), Application-level Gateway evaluates the NAT rules one after the other. If a row contains the address to be NATed in its source network, the iteration stops and Application-level Gateway modifies the IP address as specified in that row. If no match is found, the original IP address is used.

    When modifying the address, it calculates the host ID of the address using the target netmask and the source network address and adds it to the target network.

    Example 6.16. Address translation examples using GeneralNAT

    The following two tables show a number of simple and special GeneralNAT cases. The Destination Address in these cases is set to 0.0.0.0/0.

    Source network    Target network    Source IP address   Translated IP address
    10.0.1.0/24       192.168.1.0/24    10.0.1.5            192.168.1.5
    10.0.1.0/24       192.168.1.0/25    10.0.1.130          192.168.1.2
    10.0.1.0/25       192.168.1.0/24    10.0.1.42           192.168.1.42

    Source network    Target network    Original IP address   New IP address
    0.0.0.0/0         192.168.2.2/32    172.17.3.5            192.168.2.2
    0.0.0.0/0         192.168.3.0/31    172.18.1.1            192.168.3.1
    0.0.0.0/0         192.168.3.0/31    172.18.1.2            192.168.3.0
    192.168.3.1/32    172.19.2.0/24     192.168.3.1           172.19.2.0
  3. Configure caching.

    Since the NAT decision may take a long time in some cases (for example, if there are many mappings in the list), the decisions can be stored in a cache. Storing the decisions in a cache accelerates future decisions. Caching can be enabled or disabled using the Cacheable checkbox in the configuration window of the NAT policy.

    Tip

    It is recommended to enable caching for complex NAT decisions.

    Note

    Enabling caching can have interesting, but sometimes unwanted effects on some NAT types, for example on RandomNAT. Using RandomNAT and the Cacheable option together results in load balancing with sticky IP addresses: the cache remembers which source IP address was used before for a specific client's IP address and uses the same address again. This can be very useful, but can also cause a lot of problems and troubleshooting.
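
The GeneralNAT mapping configured in the procedure above corresponds to a NAT policy entry in policy.py. The sketch below maps the 10.0.1.0/24 source subnet to 192.168.1.0/24 for any destination; it assumes that each mapping entry is a (source, destination, translated) triplet as described in step 2, and the InetSubnet helper used to express the subnets is also an assumption, so verify the exact class names in the Reference Guide:

Python:
NATPolicy(name="DemoGeneralNAT",
    nat=GeneralNAT(mapping=(
        (InetSubnet("10.0.1.0/24"), InetSubnet("0.0.0.0/0"), InetSubnet("192.168.1.0/24")),
        )))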

6.7.10.2. Types of NAT policies

Application-level Gateway supports the following types of NAT policies. For details on the parameters of these NAT policies, see Section 5.9, Module NAT in Proxedo Network Security Suite 2 Reference Guide.

NAT policy Description
GeneralNAT This options means a simple mapping based on the original and desired address(es). GeneralNAT can be used to map a set of IP addresses (a subnet) to either a single IP address or to a set of IP addresses (a subnet). For details, see Section 5.9.4, Class GeneralNAT in Proxedo Network Security Suite 2 Reference Guide.
StaticNAT This option can be used to specify a single IP address/port pair to be used in address transforms. It is mainly used in DNAT configurations where incoming traffic must be directed to an internal or DMZ server that has a private IP address. Specifying port translation is optional. When used in conjunction with SNAT, StaticNAT can be used to map to IP alias(es). For details, see Section 5.9.11, Class StaticNAT in Proxedo Network Security Suite 2 Reference Guide.
RandomNAT With this option, the firewall selects an IP address from the configured NAT pool randomly for each new connection attempt. Once a communication channel (a session) is established, subsequent packets belonging to the same session use the same IP address. The transform of the port number used in RandomNAT can be fixed, even for each IP address of the NAT pool separately. It is ideal when you want to distribute the load (use) of addresses in your NAT pool evenly and you do not have specific requirements for fixed address allocation, such as IP-based authentication. For details, see Section 5.9.10, Class RandomNAT in Proxedo Network Security Suite 2 Reference Guide.
HashNAT It maps individual IP addresses to individual IP addresses very quickly, using hash values to determine mappings and storing them in hash tables. For details, see Section 5.9.5, Class HashNAT in Proxedo Network Security Suite 2 Reference Guide.
NAT46 NAT46 embeds an IPv4 address into a specific portion of the IPv6 address, according to the NAT46 specification described in RFC6052. For details, see Section 5.9.7, Class NAT46 in Proxedo Network Security Suite 2 Reference Guide.
NAT64 NAT64 maps specific bits of the IPv6 address to IPv4 addresses according to the NAT64 specification described in RFC6052. For details, see Section 5.9.8, Class NAT64 in Proxedo Network Security Suite 2 Reference Guide.

Table 6.3. NAT solutions


6.7.10.3. NAT and services

The NAT policies created in the Policies tab can be used in service definitions. Navigate to the Services tab, select a service and choose a NAT policy as either the Source NAT policy or the Destination NAT policy service parameter.

Using a NAT policy in a service definition

Figure 6.77. Using a NAT policy in a service definition


Remember that NAT policies are independent configuration entities and come into effect only if they are used in service definitions. Also, SNAT and DNAT policies are two different and independent service parameters: it is not required to have either one or both in any service definition. One service can only use a single NAT policy (or none) as its Source and another one (or none) as its Destination NAT policy parameter. These two settings usually do not reference the same NAT policy (although this is not impossible).

In general, while all NAT policies are equal in that they are freely usable as either source or destination NAT policies in service definitions, they are typically created with their future use in mind. There is no specification on whether NAT policies are SNAT or DNAT policies: they are SNAT or DNAT policies only from the point of view of the services that are using them.

NAT policies can be reused. Any number of services can use them.
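
In policy.py, referencing NAT policies from a service definition may look like the following sketch; the snat_policy and dnat_policy parameter names are assumed to correspond to the Source NAT policy and Destination NAT policy fields of MC, and the other service parameters are illustrative only:

Python:
Service(name="intra_HTTP_inter", proxy_class=HttpProxy,
        router=TransparentRouter(), snat_policy="DemoGeneralNAT")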

Note

Although it is often considered a security-enhancing feature, NAT is not intended for access control of any type. Instead, use proper Zone setups and service definitions for this purpose.

6.7.10.4. NAT and other policy objects

NAT in Application-level Gateway is an option to change source/destination IP address information on the server side of the firewall's connections, immediately before the connection is started. Since NAT decisions (if used) are made after all other IP address related configurations, such as router and proxy configuration, NAT can override these previous settings. NAT in Application-level Gateway can be used to shift IP address ranges, to set IP addresses and to customize these operations.

In a service definition there are potentially two different components that directly deal with IP address setting:

  • a router (compulsory),

  • and a NAT policy (or two) (optional).

In the address setting procedure the following processes are involved.

  1. Incoming connection is accepted and a new session is created.

  2. Destination address is set by the Router and using the Use client address as source option the source address of the server side connection is also set.

    Remember that the Router only gives a suggestion for the source/destination IP addresses as the proxy or the NAT can later override these suggestions.

  3. Router settings can be altered by the proxy if the Target address overrideable by the proxy option is set or InbandRouter is selected and the proxy has some protocol-based information.

  4. NAT is performed, depending on NAT types (SNAT/DNAT).

  5. Access control check is performed based on the final destination IP address decision. Check whether the service is allowed as an inbound service into the zone where the destination IP address belongs to.

  6. The connection to the server is established.

Note

When checking the inbound services of the zone, the IP address to which the firewall actually connects to is considered. In other words, the original destination address of the client may be overridden by the router, the proxy and DNAT as well. Zone access control uses only the final IP address, all interim addresses (set by the Router, Proxy, but not used as the final one) are ignored in the access control decision.

If a service uses an SNAT Policy, the Use client address as source is implicitly set as well so that SNAT uses the client IP address instead of the firewall IP address. That is, if the NAT policy does not include SNAT modification, the client's IP address is used even if the Use client address as source is unset in the router.

Tip

The versatility of NAT policies is especially useful in large-scale, enterprise deployments or where a lot of NAT is used.

6.7.11. Resolver policies

Resolver policies specify how a given service should resolve the domain names in client requests. This capability is essential when non-transparent services are used, as in these cases the PNS host has to determine the destination address, and the results of a name resolution are needed. Application-level Gateway is also able to store the addresses of often used domain names in a hash. Application-level Gateway supports DNS-based (DNSResolver) and Hash table-based (HashResolver) name resolution.

Resolver policies

Figure 6.78. Resolver policies


DNSResolver policies query the domain name server used by Application-level Gateway in general to resolve domain names. If a domain name is associated to multiple IP addresses (that is, it has more than one 'A' records), these records can be retrieved by checking the Return multiple DNS records checkbox. The DNSResolver policies also cache the domain names and the IP addresses found. (The DNS server used by the PNS host can be specified on the Resolver tab of the Networking component, see Section 5.3, Managing client-side name resolution for details.)

Tip

Retrieving multiple 'A' records is useful when Application-level Gateway is used to perform load balancing.

Example 6.17. Defining a Resolver policy

Python: Below is a simple DNSResolver policy enabled to return multiple 'A' records.

ResolverPolicy(name="Mailservers", resolver=DNSResolver(multi=TRUE))

HashResolver policies are used to locally store the IP addresses belonging to a domain name. A domain name (Hostname) and one or more corresponding IP addresses (Addresses) can be stored in a hash. If the domain name to be resolved is not included in the hash, the name resolution will fail. The HashResolver can be used to direct incoming connections to specific servers based on the target domain name. The HashResolver policies also cache the domain names and the IP addresses found.

Example 6.18. Using HashResolver to direct traffic to specific servers

If a PNS host is protecting a number of servers located in a DMZ, the connections can be easily directed to the proper server without a DNS query if the hostname – IP address pairs are stored in a HashResolver. If multiple IPs are associated with a hostname, simple fail-over functionality can be realized by using FailOverChainer.

The resolver policy below associates the IP addresses 192.168.1.12 and 192.168.1.13 with the mail.example.com domain name.

Defining a new HashResolver

Figure 6.79. Defining a new HashResolver


Python:
ResolverPolicy(name="DMZ", resolver=HashResolver(mapping={"mail.example.com":\
("192.168.1.12", "192.168.1.13")}))

6.7.12. Stacking providers

Stacking providers are external hosts managed by Content Filtering (CF) that perform various traffic analysis and filtering tasks (for example, virus and spam filtering). These hosts can be listed as Stacking providers and easily referenced from multiple service definitions.

A Stacking provider includes the IPv4 socket or unix domain socket of the host performing the analysis of the traffic. If multiple hosts are set in a single policy, they are used in a fail-over fashion, that is, if the first host is down, the traffic is directed to the second one for analysis, and so on.

Creating a new Stacking Provider through IPv4

Figure 6.80. Creating a new Stacking Provider through IPv4


Creating a new Stacking Provider through domain socket

Figure 6.81. Creating a new Stacking Provider through domain socket


To specify an IPv4 socket, the IP address and the port of the host have to be specified. To specify a unix domain socket, the full path (including the actual filename) has to be provided.

For details about CF, its configuration and the use of stacking providers, see Chapter 14, Virus and content filtering using CF.

6.8. Monitoring active connections

MC provides a status window to monitor the active connections of an instance (for information on instances, see Section 6.3, Application-level Gateway instances). Navigate to the Application-level Gateway MC component, select an instance, then click Active Connections.

Tip

Multiple instances can also be selected.

The Active connections window displays the following parameters of the active services within the instance:

  • Name: It provides the name of the service (for example, intra_HTTP_inter).

  • Proxy module: It provides the name of the proxy (for example, MyHttpProxy).

  • Proxy class: It identifies the proxy class from which the proxy module used in the service definition was derived (for example, HttpProxy).

  • Client address: It is the IP address of the client.

  • Client port: It is the port number of the client.

  • Client local: It is the IP address targeted by the client.

  • Client local port: It is the port targeted by the client.

  • Client zone: It is the zone the client belongs to.

  • Server address: It is the IP address of the server.

  • Server port: It is the port number of the server.

  • Server local: It is the IP address used by Application-level Gateway on the server-side (the server sees this address as client address).

  • Server local port: It is the port used by Application-level Gateway on the server-side (the server receives the connection from this port).

  • Server zone: It is the zone the server belongs to.

Note

The Proxy class and Proxy module parameters are empty for packet filter services.

Active connections

Figure 6.82. Active connections


The list of active connections is not updated in real time; it is only a snapshot. It can be updated by clicking Refresh now. The Jump to service button displays the configuration of the selected service on the Services tab.

The Active connections window is a Filter window, thus various simple and advanced filtering expressions can be used to display only the required information. For details on the use and capabilities of Filter windows, see Section 3.3.10, Filtering list entries.

6.9. Traffic reports

PNS can automatically create daily, weekly, and monthly statistics about the transmitted traffic, and send them to an administrator or auditor through e-mail. The reports are in Adobe Portable Document (PDF) format. Note that these reports do not provide detailed statistics about every host on your network, rather they can be used to identify the most active hosts ("top-talkers") and to examine trends and sudden changes in the statistics (outliers).

In general, every section of the report consists of a table that details the ten most active clients (for example, the ten clients who transferred the most data in a zone) and a pie chart that displays every client. Note that on the pie chart, only the clients responsible for at least ten percent of the total value are labeled, all other clients are aggregated under the Others label.

Note

Every PNS host, and every node of a PNS cluster creates and sends a separate report. Reporting options must be configured on every PNS host separately.

The reports include the following information:

  • Network Traffic: It provides traffic statistics for the entire network.

  • Zone Traffic: It provides traffic statistics for every zone defined in PNS. Note that this report can be long if there are many zones defined.

  • Mail Delivery Traffic: It provides statistics for the total transferred SMTP traffic, as well as for the most active accounts. Top senders and recipients are listed separately.

  • Spam and Virus Reports: It provides statistics about spam and infected e-mails.

  • Access Control Reports: It provides statistics about connection-attempts that were blocked by PNS.

  • URL Reports: It provides a list of websites generating more than a set amount of traffic, and a list of their top visitors. By default, URL Reports are not included in the regular reports, see Procedure 6.9.1, Configuring PNS reporting for details on configuring them.

6.9.1. Procedure – Configuring PNS reporting

  1. Log in to the PNS host locally, or through SSH, and edit the /etc/PNS/reports/options.conf file. Alternatively, you can add this file to the Text Editor plugin of MC (see Procedure 8.1.1, Configure services with the Text editor plugin for details).

  2. Enter the e-mail address where the reports should be sent into the ADMINEMAIL field.

  3. Enter a name for the daily, weekly, and monthly report files into the TITLE_DAILY, TITLE_WEEKLY, TITLE_MONTHLY fields.

  4. By default, the reports do not include statistics about visited websites. To include statistics about visited websites and the clients who visited them, enter a positive number into the URLS field. Only URLs that generate more traffic (in megabytes) than this value are included in the reports. For example, URLS=1024 includes every website that generates at least 1 GB of traffic. The direction of the traffic does not matter: uploads and downloads are counted together. (A sketch of the resulting file is shown after this procedure.)

  5. Save the file.
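For reference, the relevant part of the options.conf file might look like the following sketch. The exact file format is an assumption (simple KEY=value pairs, as the field names above suggest), and all values are placeholders.

  # Where the PDF reports are e-mailed (they are not stored locally)
  ADMINEMAIL="auditor@example.com"
  # Titles of the generated report files
  TITLE_DAILY="Daily PNS traffic report"
  TITLE_WEEKLY="Weekly PNS traffic report"
  TITLE_MONTHLY="Monthly PNS traffic report"
  # Include websites generating at least 1 GB of traffic in the URL report
  URLS=1024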

Warning

The reports are only sent to the e-mail address set in /etc/PNS/reports/options.conf, they are not stored locally on the PNS host.

Tip

To generate reports manually for arbitrary time periods, use the report.py command-line tool. Run report.py without parameters to display the available options and parameters of the application.

Chapter 7. Logging with syslog-ng

Firewall design and implementation generally require the presence of a subsystem that is responsible for accountability. This subsystem stores information on user and administrator activities performed on or through the firewall and must be configurable to provide exactly the amount and type of information needed, which, on the other hand, is usually defined by the corporate IT Security policy.

This requirement in computer systems is typically fulfilled by a logging subsystem. Logging is the act of collecting information about events in the system. The output of logging can be any of the following:

  • one or more log files (binary or text-based files)

  • the console

  • rows in a database table if logging is set up to use a database as the output store

These log files are archived for event-tracking purposes. Apart from a simple, automated archiving procedure, however, these log files must also be analyzed continuously. Logging records user and system activities in (quasi) real time, so continuous analysis can detect malicious user activities or system malfunctions as they take place, thereby allowing for an effective and quick response.

In security-sensitive systems, such as the PNS firewall, logging must be a system-wide action, that is, all the relevant components of the system must provide logging information, or log entries. Therefore, there must be a central component, a logging subsystem, that collects log entries from the various system components and organizes them into a log file of some format (the term file is used loosely here and does not necessarily mean a text or binary file: it is simply the output of logging, which can be a database, the console or an input queue on another machine). It would be neither practical nor economical for all the components to manage their own log files. It would waste system resources (more open file handles, database connections or network sessions, depending on the output destination used) and would make analysis cumbersome (hunting for information in several separate log files that would probably be syntactically incompatible).

In PNS, centralization is elevated to a higher level: the logs of the central logging subsystem on all machines are unified on a network-wide dedicated central logging machine. This way, log analysis, reporting and archiving is simpler even for larger networks with many entities providing logging capabilities.

These requirements have, in part, long been targeted by the Unix syslog subsystem. For decades, syslog was the ultimate central logging subsystem for Unix and Linux machines. It is an old and rather unreliable technology and lacks some flexibility and security features that are required in today's demanding network and security environments. Therefore, a replacement service called syslog-ng, where "ng" stands for "next generation", has been created. The syslog-ng application inherited all the concepts, features and most of the naming conventions from syslog and enhanced it with several new features targeting flexibility and security requirements. PNS uses syslog-ng as its logging subsystem.

The following sections include a short introduction to the concepts of syslog-ng, and describe how MS works with syslog-ng and how this subsystem can be managed with MC.

7.1. Introduction to syslog-ng

The syslog-ng application runs as a daemon process and collects information from various log sources. Depending on the options and filters configured, syslog-ng saves the received log entries to the specified destinations. The configuration of syslog-ng mainly consists of configuring its components correctly.

The components of syslog-ng are the following:

  • Sources

  • Global options

  • Filters

  • Destinations

The syslog-ng configuration is stored in a text-based configuration file that is typically the /etc/syslog-ng/syslog-ng.conf file. MC hides the exact structure of this configuration file and takes care of the correct syntax, allowing the administrator to concentrate on the actual configuration tasks. However, as syslog-ng is present in more and more Linux/Unix distributions, it is beneficial to know the syntax and the content of this configuration file too. In addition, syslog-ng allows for centralized logging from machines not necessarily under the control of MS. In this case configuring syslog-ng means manually editing the corresponding configuration file.

The syslog-ng.conf file has a C-like syntax, with curly braces ({}) delimiting blocks and semicolons (;) closing statements. Comments begin with a hash mark (#).
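For illustration, a minimal configuration written in this syntax could look like the following sketch. The source, destination, and log names are examples only and do not reflect the configuration that MC generates.

  # Collect local log messages from /dev/log and from syslog-ng itself
  source s_local { unix_stream("/dev/log"); internal(); };
  # Write everything to a single text file
  destination d_messages { file("/var/log/messages"); };
  # Connect the source to the destination
  log { source(s_local); destination(d_messages); };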

7.1.1. Global options

It is possible to provide syslog-ng with global options that affect all the logging commands, although they can be overridden, if required. For further details, see Section 7.2.2.3, Configuring destinations.

There are several different options available, some of them are especially useful when dealing with log entries coming from other machines on the network. By default, these entries are recorded using the sender host's IP address, but by using options use-fqdn(), use-dns(), chain-hostnames() and keep-hostname() it is also possible to look up the hostnames for servers generating log entries.
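In raw syslog-ng syntax, these name-resolution related settings would appear in an options block similar to the following illustrative sketch (in MC they are set on the Global options tab, see Section 7.2.2.1):

  options {
      use_dns(yes);          # resolve the sender's IP address to a hostname
      use_fqdn(no);          # record the short hostname instead of the fully qualified name
      keep_hostname(yes);    # keep the hostname already present in the message
      chain_hostnames(no);   # do not append the names of relaying hosts
  };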

For a full list of the available options and their descriptions, see Appendix B, Further readings.

7.1.2. Sources

There are several system components that do not output log entries in a unified format or method. Some of them output to files, while others use a pipe, or use a unix-stream. Some can even be configured to use a certain output method. The syslog-ng application can accept log entries from these output methods too.

The syslog-ng application supports the following source types:

  • internal()

    The log messages of syslog-ng itself.

  • file()

    This source is for log entries from a special file, like /proc/kmsg.

    Note

    A file source cannot be an ordinary text file, for example, one generated by httpd. However, it is possible to feed syslog-ng with messages from such a file indirectly. For this, a custom script is required, for example, a script that uses tail -f to transfer messages from the desired logfile to the logger utility.

  • pipe()

    This source is for messages from a pipe.

  • unix_stream()

    This source is for log entries from a connection–oriented socket.

  • unix_dgram()

    This source is for log entries from connectionless sockets.

  • tcp()

    Log entries from remote machines that use TCP for log entry submission.

    Note

    One of the advantages of syslog-ng over traditional syslog is that it can handle TCP connections.

    By default, syslog-ng uses TCP port 514.

  • udp()

    Log entries for remote machines that use UDP for log entry submission.

    By default, syslog-ng uses UDP port 514.

  • systemd-journal()

    This source is for collecting messages from the systemd-journal system log storage.

The most important sources when dealing with local components' log entries are probably unix_stream() and unix_dgram(), because the main system components, such as the kernel logger and many of the daemon processes, use one of them for recording log events.
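A source statement groups one or more of these drivers under a common name. For example, a source roughly equivalent to the Default template described later in Procedure 7.2.1, Configure syslog-ng could be written as the following sketch (the name s_base and the driver list are illustrative, not the exact generated configuration):

  source s_base {
      internal();                # syslog-ng's own messages
      unix_stream("/dev/log");   # messages from local daemons
      file("/proc/kmsg");        # kernel messages
  };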

7.1.3. Destinations

The syslog-ng application can send log messages to the following types of destinations.

  • file()

    This destination is an ordinary text file. It is possible to use macros in filenames, thus you can create dynamic file names.

  • pipe()

    This destination is a named pipe.

  • program()

    This destination means the standard input of a given program.

  • syslog()

    Send messages to a remote syslog server specified by its IP address or FQDN using the RFC 5424 (IETF syslog) protocol over TLS, TCP, or UDP. The default destination port is 514.

  • tcp()

    Send messages to a remote syslog server specified by its IP address or FQDN using the RFC 3164 (BSD syslog or legacy-syslog) protocol over TCP or TLS. The default destination port is 601.

  • udp()

    Send messages to a remote syslog server specified by its IP address or FQDN using the RFC 3164 (BSD syslog or legacy-syslog) protocol over UDP. The default destination port is 514.

  • unix_dgram()

    This destination is a connectionless Unix socket destination.

  • unix_stream()

    This destination specifies a connection–oriented Unix socket as a destination for log entries, for example, /dev/log.

  • usertty()

    This destination sends log messages to the terminal of a given user. Username is given as a parameter of usertty.
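Like sources, destinations are named statements that group one or more of the drivers above. A few illustrative examples follow; the file path, hostname, and script path are placeholders only:

  destination d_messages { file("/var/log/messages"); };
  destination d_logserver { syslog("logserver.example.com" transport("tcp")); };
  destination d_alert { program("/usr/local/bin/log-alert.sh"); };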

7.1.4. Filters

To fine-tune which log entries are needed and how they are forwarded to the different destinations, it is possible to use filters in the syslog-ng configuration. Although their usage is optional, they are highly recommended because they provide the real flexibility of syslog-ng.

Filtering can be defined to use seven different criteria that are summarized in the following list.

facility()

It filters on the facility of the message, that is, the subsystem the log entry comes from. For example: auth, cron, daemon, kern, mail.

priority()

It filters the assigned priority level of the log message.

The possible priority levels are the following in the order of severity: none, debug, info, notice, warning, err, crit, alert, emerg.

level()

It is the same as priority.

program()

It is the name of the software component that generated the log entry.

host()

It is the machine that the log message arrived from.

match()

It is a regular expression that is compared to the contents of the log message.

filter()

It references another, previously defined filter.

By combining these elements, you can manually configure a fairly complex logging environment in a couple of lines of “code”, with basic knowledge of the syntax of syslog-ng rules. If you use MC, it takes care of the correct syntax and allows you to focus on the actual rule creation process.
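For example, a filter selecting warning-or-higher messages of the authentication subsystem, together with a log statement that uses it, could look like the following sketch (all names are illustrative):

  filter f_auth_warn { facility(auth, authpriv) and level(warning..emerg); };
  log { source(s_base); filter(f_auth_warn); destination(d_messages); };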

For more detailed information on syslog-ng, see Appendix B, Further readings.

7.2. Configuring syslog-ng with MC

7.2.1. Procedure – Configure syslog-ng

To configure system logging in Proxedo Network Security Suite complete the following steps.

  1. Select the host where you want to configure system logging, then click New.

  2. Choose a template for the System logging component.

    Selecting a syslog-ng template

    Figure 7.1. Selecting a syslog-ng template


    The following templates are available for the component:

    • Default: It collects logs from /dev/log and /proc/kmsg, and stores them in the /var/log/messages file.

    • Remote destinations: Type the IP address of your logserver into the Logserver IP field. It collects logs from /dev/log and /proc/kmsg, and stores them in the /var/log/messages file. The log messages are also sent to the logserver over TCP using the legacy BSD-syslog protocol (RFC3164).

    • Debian default: It collects logs from /dev/log and /proc/kmsg, and stores them in several different files, like the default syslog configuration on Debian systems.

    • Minimal: It defines an empty configuration.

  3. View this initial configuration file by selecting the system logging component of the given host and clicking the View Current Configuration button.

    Basic syslog-ng.conf file created from the system logging chroot template

    Figure 7.2. Basic syslog-ng.conf file created from the system logging chroot template


7.2.2. Configuring syslog-ng components through MC

The main configuration window of the system logging component consists of five tabs, corresponding to the five main components of a syslog-ng configuration.

Configuration tabs for the system logging component

Figure 7.3. Configuration tabs for the system logging component


The only tab that needs some explanation is the first one, Routers. This tab is used to assemble the actual log commands from the other components, so a router actually represents a log command entry in the syslog-ng.conf file. Therefore, during a configuration cycle, it is recommended to visit and configure the Routers tab last. First, configure the Global options tab to set up system-wide defaults, then you can continue with the Sources and Destinations tab.

7.2.2.1. Configuring global options

7.2.2.1.1. Procedure – Set global options

The Global options main tab contains three further sub-tabs for configuring the necessary parameters:

  • General

  • Permissions

  • Name resolutions

  1. Configure the parameters for I/O operation optimization.

    File I/O is always expensive in terms of system time, so theoretically the number of (log) write operations should be minimized by keeping a number of incoming log entries in a memory buffer and batch-writing them out to disk.

    Note

    This buffer, and thus the time between successive log write-outs, should not be too large, because if a hardware malfunction occurs and the machine has to be rebooted, the log messages that have not yet been written out are lost.

    Global syslog-ng options for file handling

    Figure 7.4. Global syslog-ng options for file handling


    Time-related parameters are given in seconds, message size is in bytes, while message queue size is an item number.

  2. Set system time usage.

    Macro substitution is possible in syslog-ng, for example when creating filenames. If you use system time as a macro variable, the default is to use local system time on the syslog-ng server that processes the log entries. If, instead, you want to use time values received in the log messages themselves, check the Use received time in macros checkbox.

  3. Configure the required parameters under General tab.

    The list of other configurable parameters in this tab includes the following.

    Message size

    It defines the allowed maximum size for log messages.

    Message queue size

    It defines the allowed number of messages waiting to be processed.

    Stats interval

    It sets the syslog-ng's internal reporting interval. The syslog-ng application reports a number of parameters on its own operations and statistics.

    Mark interval

    It sets the regularity of marking timestamps by the syslog daemon.

    Sync interval

    It defines how often log messages are written out from memory.

    The default '0' means there is no time delay, messages are written out continuously.

    File inactivity timeout

    It defines how long an unused log file is kept open before it is closed.

    Reopen interval

    It sets how often a log file can be opened again.

    Bad hostname regexp

    It is a regular expression matching hostnames that should not be handled.

    Fraction digits of second

    The syslog-ng application can store fractions of a second in the timestamps according to the ISO 8601 format. This parameter specifies the number of digits stored.

    Time zone

    By setting this parameter timestamps will be converted to the timezone specified here. This timezone will be associated with the messages only if no timezone is specified within the message itself.

    Receive time zone

    It specifies the time zone associated with the incoming messages, if it is not specified otherwise in the message or in the source driver.

    Send time zone

    It specifies the time zone associated with the messages sent by syslog-ng, if it is not specified otherwise in the message or in the destination driver.

    On error

    It controls what happens when type-casting fails and syslog-ng cannot convert some data to the specified type.

    Use received time in macros

    It specifies whether syslog-ng shall accept the timestamp received from the application or client sending it. If it is disabled, the time of reception will be used instead.

    Check hostname validity

    It enables or disables checking whether the hostname contains valid characters.

    Use threads

    This parameter enables multithreading in syslog-ng.

  4. Assign owner and permission parameters on the Permissions tab to log files and directories created by syslog-ng.

    Permission settings for logfile creation

    Figure 7.5. Permission settings for logfile creation


    By default, syslog-ng runs as root, but can be configured to run as a limited user as well. In this case you have to set the appropriate permissions, or use the default values.

  5. Set name resolution for syslog-ng under the Name resolutions tab.

    Name resolution settings for syslog-ng

    Figure 7.6. Name resolution settings for syslog-ng


    Machine identification in log entries is accomplished by using IP addresses. If you want to use hostnames that are easier to remember and recognize, you can instruct syslog-ng to perform name resolution. This name resolution only works for resolving the IP addresses of hosts sending log entries.

    If there are IP addresses within the log messages themselves, they are not resolved this way. To perform name resolution for those addresses, a log analyzer utility is needed. Name resolution is a time-consuming process and to achieve the best results, use a DNS server that is “close” to the syslog-ng server in terms of response time.

    On the other hand, log entries are typically coming from a limited number of machines (servers) and their IP addresses tend not to change. Therefore, it is reasonable for the syslog-ng server to cache their resolved names locally, thus easing the heavy reliance on a DNS server.

    You can configure DNS caching as a global option, under the name resolution tab. The time values are in seconds, cache size is in bytes. File options can be changed in individual file destination configurations, but name resolution options cannot, they are always global.

7.2.2.2. Configuring sources

Sources are collections of communication channels where log entries can arrive. A source in syslog-ng consists of one or more drivers. A driver is the actual communication channel that is monitored for log messages.

By default, using one of the default templates, there is an internal driver for syslog-ng's own log messages, and there are three unix-stream drivers for /dev/log, bind, and ntp, respectively.

The default source is called base. To configure new drivers, either define them under this default source or create new sources. A source must always contain at least one driver.

7.2.2.2.1. Procedure – Create sources

  1. To create a source, click New.

  2. Provide a name for the source.

7.2.2.2.2. Procedure – Create drivers

  1. Click New in the Drivers subwindow on the Sources tab of the System logging component.

    The following window appears.

    Adding a new source driver for syslog-ng

    Figure 7.7. Adding a new source driver for syslog-ng


  2. Select a driver type.

    The rest of the options are based on this selection.

    1. For unix_dgram, unix_stream, sun_stream and file driver types, set the filename.

      Note

      None of these driver types refer to ordinary text files: the file driver reads a special file (such as /proc/kmsg), while the others are socket endpoints. Nevertheless, they are all identified by filenames.

    2. If you have a custom system component, for example, a daemon, that sends its log messages to a special socket and you want syslog-ng to collect this component's log messages, set up a driver for it. Many of the Linux daemons and other software components prefer /dev/log but it is not a central requirement. Some software applications can even be instructed with the help of the configuration file where to log.

    3. For TCP and UDP source drivers, specify an IP address and a port number.

      The machine running syslog-ng waits for log messages from other servers on this IP address/port pair. In other words, here you do not specify from where, that is, what machines the log entries arrive from, but rather on what IP address/port pair syslog-ng collects these log entries.

      The default port for both TCP and UDP is 514.

      For TCP drivers some additional parameters can be supplied.

      Configuring TCP source drivers

      Figure 7.8. Configuring TCP source drivers


      Since TCP is a connection-oriented protocol, a virtual session is always established between the communicating parties. This session buildup takes time and bandwidth (three-way handshake); therefore, to save some of these resources, once a session is established between syslog-ng and the host sending log entries, it is kept alive with the help of keep-alive messages. However, if the number of active TCP sessions is high, it can have a negative effect on the performance of the host running syslog-ng. On the other hand, if the number of sessions is kept low using the Connection limit setting, some log messages may be lost once the connection limit has been reached.

      The Program override parameter enables replacing the ${PROGRAM} part of the message with the provided parameter string. The Flags parameter specifies the log parsing options of the source.

      Another small optimization setting is the Do not close during reload checkbox: it instructs the system not to close open TCP sessions while syslog-ng configuration is reloaded.

      These two settings are available for the unix_stream driver type as well.

      Additional parameter configuration options are as follows:

      • Use encryption: If this option is enabled, a TLS-encrypted channel is used.

      • Certificate: It specifies the certificate used to authenticate the syslog-ng client on the destination server.

      • CA group: It specifies the CA group to verify peer certificates.

      • Peer verify: This option defines the verification method of the peer.

7.2.2.3. Configuring destinations

The logic behind destination configuration is the same as with sources' configuration. You can create one or more destination directives and fill them with drivers specifying the actual channels on which log messages are recorded.

The available destinations are the following:

  • file

  • syslog: TCP, UDP, or TLS-encrypted TCP through the RFC5424 (IETF-syslog) protocol

  • TCP through the RFC3164 (BSD-syslog or legacy-syslog) protocol, including TLS-encrypted TCP

  • UDP through the RFC3164 (BSD-syslog or legacy-syslog) protocol

  • pipe

  • program

    Program specifies the standard input of a program, typically a script, that can perform an action based on the log message it receives.

    Tip

    This may be a good solution to set up an alerting mechanism.

  • Unix_dgram

  • Unix_stream

  • Usertty

    Usertty is a terminal on which log messages can be displayed; it is meaningful as a destination only if someone is actually monitoring that terminal, or if the named terminal is routed to some other, special destination.

Configuring TCP and UDP destinations

Figure 7.9. Configuring TCP and UDP destinations


Host and Port define the destination of log messages on the network, while Bind IP and Bind port specify the address and port the log messages are sent out from on the host. This is especially important for firewalls, which almost always have two or more interfaces and a number of IP addresses.

Additional parameter configuration options are as follows:

  • Use encryption: If this option is enabled, a TLS-encrypted channel is used.

  • Certificate: It specifies the certificate used to authenticate the syslog-ng client on the destination server.

  • CA group: It specifies the CA group to verify peer certificates.

  • Peer verify: This option defines the verification method of the peer.

File destination also has some important properties that need some explanation:

Configuring a file destination driver

Figure 7.10. Configuring a file destination driver


Most properties are also present on the Global options tab: many of the global options can be overridden in the file destination setup. For the descriptions of the properties, see Section 7.1.1, Global options.

A special property is the Message template. This property can be used to apply some basic formatting on log messages: according to the example given in Figure 7.11, Macro substitution in file naming, all log messages that end up in /var/log/syslog have a timestamp ($ISODATE) and a hostname ($HOST) inserted before the actual log message ($MSG). This is the default behavior in PNS.

Note

The Message template property is not to be confused with macro substitution in filename creation. Message templates can format actual log messages, while macro substitution can format log file names.

For example, if you want to create a new logfile every day, modify the filename property in Figure 7.11, Macro substitution in file naming the following way.

Macro substitution in file naming

Figure 7.11. Macro substitution in file naming
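In raw syslog-ng syntax, the two mechanisms correspond roughly to the following sketch (the path and the exact template string are illustrative; in MC they are entered in the fields shown in the figures above):

  destination d_syslog {
      file("/var/log/syslog.$YEAR$MONTH$DAY"       # macro substitution: a new logfile every day
           template("$ISODATE $HOST $MSG\n"));     # message template: timestamp and host before the message
  };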


7.2.2.4. Configuring filters

An optional component of syslog-ng configuration is filter creation. Filters can be used to pick log entries from defined sources with the possible intent of sending selected log entries to different destinations.

Example 7.1. Selecting log messages from Postfix using filter

The following is a trivial filter to select log messages coming from Postfix:

filter f_postfix{program("postfix");};

Filters can use regular expressions in match criteria, and a number of other criteria as well. For a complete list of criteria, see Section 7.1.4, Filters. Due to the flexible nature of filters, it is almost impossible to create a GUI that covers all their possibilities. Therefore, the Filter tab of the System logging component is quite simple.

7.2.2.4.1. Procedure – Set filters

  1. Create one or more filters.

    See Section 7.2.2.2, Configuring sources and Section 7.2.2.3, Configuring destinations.

  2. Set up a rule for each filter in the Filter rule textbox.

    The Filter rule textbox

    Figure 7.12. The Filter rule textbox


    MC aids in filter creation by taking care of the necessary curly braces ({}) and semicolons (;).

    To create a syntactically correct postfix filter, enter the following into the Filter rule textbox:

    program("postfix")

For further information on possible filters, see Appendix B, Further readings.

7.2.2.5. Configuring routers

Logging rules are called Routers in syslog-ng terminology. Rules consist of a source, optionally a filter and a destination. The Routers tab of the System logging component represents this philosophy well.

Configured routers

Figure 7.13. Configured routers


Just like sources, destinations and filters, more than one router can be present in the system. If you use several routers, it is recommended to apply a good naming strategy to easily identify the relevant log rules.

7.2.2.5.1. Procedure – Configure routers

  1. To create a router, click New.

  2. Provide a name for the new router.

  3. Select the components from the list of available sources, filters and destinations.

    Selecting router components

    Figure 7.14. Selecting router components


    Example 7.2. Setting up a router

    If you select base as the source, postfix as the filter (optional, use the small arrow between the text boxes) and syslog as the destination, the resulting router, that is, the log rule looks like this:

    log { source(s_base); filter(f_postfix); destination (d_syslog);flags ( ); };

  4. Specify flags for the router.

    By default, no flag is set. Use the checkboxes at the bottom of the window to select flags. Three flags are available.

    • Final flag means that the processing of log statements ends at this point.

      Note

      Ending the processing of log statements does not necessarily mean that matching messages are stored only once, as there can be matching log statements processed prior to the current one.

    • Fallback flag marks a log statement for 'fallback'. Defining a fallback statement means that only those messages are sent which do not match any 'non-fallback' log statements.

    • Catchall flag means that the source of the message is ignored, only the filters are taken into account when matching messages.

    Configuring flags for routers

    Figure 7.15. Configuring flags for routers


There are virtually endless possibilities for configuring a complex system logging architecture with syslog-ng. This chapter focused only on the basic concepts and presented an architecture that includes not only the PNS and MS host nodes, but can also incorporate practically any Unix/Linux machine.

For further information and details, see The syslog-ng Administrator Guide.

7.2.3. Procedure – Configuring TLS-encrypted logging

Purpose: 

To encrypt the communication between the PNS host and your central syslog server, complete the following steps.

Steps: 

  1. Navigate to System logging > Destinations > New, and enter a name for the new destination (for example, tls-logserver).

    Creating a new syslog destination

    Figure 7.16. Creating a new syslog destination


  2. Select Drivers > New, then Driver type > tcp.

    Configuring the syslog destination

    Figure 7.17. Configuring the syslog destination


  3. Set the Use syslog-protocol option to enabled, if you want the messages to be formatted according to the new IETF syslog protocol standard (RFC5424).

  4. Set the hostname and the port of your logserver in the Host and Port fields.

  5. Select the network interface of PNS that faces the logserver from the Bind IP field.

  6. Select Use encryption.

  7. If your logserver requires mutual authentication, that is, it checks the certificates of the log clients, select the certificate PNS should show to the logserver from the Certificate field.

  8. Select the trusted CA group that contains the certificate of the CA that signed the certificate of the logserver from the CA Group field.

  9. By default, PNS will verify the certificate of the logserver, and accept only a valid certificate. It is possible to have less strict criteria by modifying the Peer verify option. For details on the possible values, see Section 3.2.5, Certificate verification options in Proxedo Network Security Suite 2 Reference Guide.

  10. Click OK.

  11. Select the Router tab, add a new router and name it (for example, TLS).

    Configuring the syslog router

    Figure 7.18. Configuring the syslog router


  12. Add the earlier defined new destination to this router.
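The configuration generated for such a destination corresponds roughly to the following syslog-ng sketch. The hostname, port, and certificate paths are placeholders only; the actual file is generated and uploaded by MS.

  destination d_tls_logserver {
      tcp("logserver.example.com" port(514)
          tls(ca_dir("/etc/syslog-ng/ca.d")                      # trusted CA group
              cert_file("/etc/syslog-ng/cert.d/pns-client.crt")  # client certificate for mutual authentication
              key_file("/etc/syslog-ng/cert.d/pns-client.key")
              peer_verify(required-trusted)));                   # accept only valid logserver certificates
  };
  log { source(s_base); destination(d_tls_logserver); };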

Chapter 8. The Text editor plugin

All the essential parts of PNS configuration have a corresponding custom configuration component in MC. For auxiliary services that are not part of the default PNS configuration but are installed separately, for example, a Snort Intrusion Detection System (IDS) service, MC provides a generic tool, the Text editor plugin configuration component, to edit text-based configuration files. Since practically all Unix services work with text-based configuration files, the Text editor plugin can be used to configure all of them remotely.

Although the Text editor plugin provides only simple text editor functionality, if you use this tool to configure the custom services, they are placed under the control of MS. From MC, the configuration changes are first committed to the MS server and stored there. After an upload command, these configurations are handled by the vms-transfer-agent on the corresponding PNS firewall. So any service installed on the PNS host is, in effect, MS-managed.

The number of Text editor plugins is not limited; you can add as many of them as needed. Furthermore, a single plugin can be used to edit more than a single text file.

8.1. Using the Text editor plugin

The Text editor plugin in MC is called Text Editor. It can be added to the list of configuration components the usual way.

8.1.1. Procedure – Configure services with the Text editor plugin

  1. Select a configuration template.

    By default, there are some predefined configuration templates available, these are actually configuration file skeletons for the given services.

    Default templates for the Text editor plugin

    Figure 8.1. Default templates for the Text editor plugin


    Unless you want to configure DNS or NTP services, or other predefined text components, select the Text editor minimal template.

    The following configuration window appears.

    Configuration window for the Text editor plugin

    Figure 8.2. Configuration window for the Text editor plugin


  2. Provide the name of the Text component.

    This parameter sets a label for the component.

  3. Provide the name of the Component configuration file.

    Component configuration file name refers to the actual configuration file on the host that you want to edit.

    It is most likely a file in /etc or in one of its subdirectories since this is the default location for configuration files.

  4. Set the Component systemd service name.

    Component systemd service name refers to the name of the systemd service that is used to start, stop, restart, reload or check the status of the given service. Without providing this option, it is not possible to control the service from MC.

    Note

    MC runs the /bin/systemctl start/stop/restart/reload/status command with the given parameter.

  5. Click OK after setting the necessary parameters.

    The main window of MC reappears with an empty component pane. This is normal. Before working with the desired configuration file it is possible to download the content from the selected host.

  6. Download and edit content from the selected host.

    Use the download button on the button bar to download the file to MC.

    You can edit the configuration file.

  7. Propagate the modifications to the firewall.

    The steps are identical to that of the other components: commit the edited file to MS and then upload it to the host. Finally, the service can be started/stopped/restarted or reloaded the usual way.

Besides the file download option, the Text editor plugin provides some unique features compared to the other components.

8.1.2. Procedure – Use the additional features of Text editor plugin

  1. To simplify working with configuration files, you can insert precreated configuration file segments with the help of the Insert file button on the button bar.

    Note

    The file to be inserted must be present on the MC machine. If you insert more than one file, they appear concatenated in the main workspace.

  2. You can create new files with the New button on the button bar.

    If you create more than one file, they appear on different tabs in the main workspace of the Text editor plugin. This way a single plugin can host several different files.

    Note

    If you create a file with the Text Editor component, MC sets the permissions of the file and the directory to root:root 700, regardless of the original permissions or the umask settings of the system.

  3. You can remove configuration files with the Remove button.

    By default, the Remove button only removes the configuration file from the MS database. If you want to remove the file from the host as well, check the appropriate checkbox in the Remove dialog box.

    The Remove file dialog box

    Figure 8.3. The Remove file dialog box


  4. You can also place links to linkable objects of the configuration in a file, if you right-click in the main workspace of the Text editor plugin and select Link... from the local menu.

    The local menu of the Text editor plugin

    Figure 8.4. The local menu of the Text editor plugin


Note

There is no restriction on what file can be administered via the Text editor plugin except that it must be a text-based file.

Chapter 9. Native services

The default PNS installation includes some service components that are typically useful in networking environments. These are BIND for DNS traffic, NTP, and Postfix for SMTP traffic. The use of these services is not mandatory; however, they can help solve three particularly problematic issues of network configuration. Once configured, access to these services (so-called local services), as well as remote SSH access to the PNS host, must be enabled separately by adding a local service. See Section 9.4, Local services on PNS for details.

For enhanced security, some of these services run in a chrooted environment, otherwise known as a jail. A jail is a special, limited directory structure where the service executables and all the accompanying files, such as configuration files, are installed. The jailed service can only access this limited part of the file system hierarchy and is unaware of the rest of the file system. The chrooted environment is a virtual subtree of the full file system, and the top of this subtree is seen by the chrooted (jailed) service as the root '/' directory. From the service's point of view, the jail is a complete file system in the sense that it contains all the directories the service needs access to. For example, if the service needs a library from /lib or from /usr/lib, these two directories, together with the actual library files needed, are included (copied) in the jail environment. The jail isolates the service from the rest of the system, so even if its security is compromised, it can only damage the contents of the jail and cannot affect the rest of the system.

Note

The BIND and NTP services do not run in jail, but use AppArmor instead. It is possible to set the BIND service to run in jail automatically.

9.1. BIND

BIND is the industry standard DNS solution used in the vast majority of Linux/Unix based name resolution services. It has three different “branches”, ISC BIND 4, 8 and 9, the most advanced development taking place in the ISC BIND 9 branch.

Installing PNS automatically installs an ISC BIND 9 version. The BIND shipped with PNS is a full implementation of the most up-to-date version of the 9 branch, so theoretically it is possible to configure it for any DNS server role. However, since it is hosted on a firewall, such liberal use of BIND is not recommended. Instead, it should be used as a limited-purpose DNS server. The following two examples show such possible configurations.

9.1.1. BIND operation modes

Example 9.1. Forward-only DNS server

In this scenario, BIND does not store zone information of any kind, instead, it simply forwards all name resolution requests to a designated nameserver located elsewhere. This way, BIND configuration and maintenance is minimal while name resolution traffic is optimized: BIND caches resolved name-to-IP address mappings, thereby saving some bandwidth and improving name resolution speed.

This setup is especially recommended for small to medium-sized networks where DNS zone information of the company is maintained off-site, typically at an ISP, and thus maintaining a dedicated nameserver only for Internet name resolution is not economical.

In this setup BIND operates essentially as a DNS proxy.

Example 9.2. Split-DNS implementation

In this setup two sets of records on the DNS server are maintained:

  • a public set which is available for general access, and

  • a private set that is available for internal users only.

With this setup it is possible for a company to both maintain its own public DNS zone records (SOA, NS, MX and A records for hosts running popular services like WWW or FTP) and some internal DNS records for servers that are (and must be) available for internal users only.

This setup is recommended for companies wishing to host their own DNS zone database when the number of external name resolution requests does not justify the use of a dedicated DNS server.

9.1.2. Configuring BIND with MC

The configuration of BIND is stored under /etc/bind/, where the most important file is named.conf. This is the general configuration file. Zone database information is stored in db.domainname files under the same directory.

MC does not offer a dedicated component to edit BIND configuration, instead, the Text editor component offers preconfigured templates for this task.

9.1.2.1. Procedure – Configuring BIND with MC

  1. Add the Text editor as a new component.

  2. Select a template to be used with the Text Editor.

    Selecting a Text editor template

    Figure 9.1. Selecting a Text editor template


    Select one of the first two templates, depending on whether you want a split DNS configuration or not.

    Click OK.

  3. Configure the basic settings in the opening window.

    Configuring basic BIND settings

    Figure 9.2. Configuring basic BIND settings


    1. Provide the Domain Name Service name.

      This parameter simply specifies a label for the component that appears in the components pane.

    2. Specify Query source.

      This parameter defines where the outgoing name resolution requests originate on the firewall.

      Note

      Prior to BIND 8.1 the source port was 53 (just like the destination port), but since then BIND uses a port from the dynamic range, 5300 by default.

      This might be important in back-to-back firewall configurations where there is another firewall in front of this instance of PNS. To allow outgoing DNS requests, the front firewall must know the source port used by the BIND service.

      Besides supplying an alternate port number, you can supply a fixed IP address of PNS if it has more than one in the required direction. If this setting is not relevant in your network environment, choose the IP address of the outside interface.

    3. Define Forwarders.

      In a PNS installation, BIND is usually configured as a forward-only nameserver. If you configure a forwarder, BIND does not resolve names recursively on the Internet, but instead it forwards all name resolution requests to the DNS server specified as the forwarder.

    After entering values for these parameters the first round of BIND configuration is ready, a functional forward-only nameserver is in place.

  4. To permit access to the BIND service, enable the dns local service. If you plan to host zone database information on the PNS Gateway, enable the dns-zonetrans local service as well. See Section 9.4, Local services on PNS for details.

    Note

    If you use zone transfer, be careful with selecting which zones you accept zone transfer requests from.

If you go back to the DNS configuration component you can see the structure of the named.conf file with the values entered for query-source and forwarders.

named.conf in MC

Figure 9.3.  named.conf in MC


Note that this tool is suitable for editing the named.conf file only. If you want to host zone database files (db.domainname) on the firewall, you can edit them separately by creating new files in the Text Editor component. However, for forward-only nameserver configurations, editing the named.conf file is generally sufficient.
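For reference, the core of a forward-only named.conf corresponds roughly to the following options block. The addresses are placeholders only; the actual file is generated by MC from the values entered above.

  options {
      // "Query source": send outgoing queries from this address and port
      query-source address 203.0.113.1 port 5300;
      // "Forwarders": send all name resolution requests to this DNS server
      forwarders { 198.51.100.53; };
      forward only;
  };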

9.1.3. Procedure – Setting up split-DNS configuration

Setting up a split DNS service is useful in networks where both external and internal name resolution is performed using the same DNS server – in this case the PNS firewall. Since hiding the internal namespace from external visitors is a basic security requirement, you have to set up the DNS service in a way that it does not resolve internal names for external resolvers. In other words, for all DNS zones stored on the server you have to specify which networks can query for records in the given zone.

  1. Add the Text Editor component.

  2. Select the split-dns template.

    Two skeleton files are created, a named.conf and a named.conf.shared.

    The named.conf.shared file holds records and configuration settings that are shared between external and internal name resolution operations, while named.conf has options to specify internal and external networks (internalips and externalips). These networks can then be referenced in db.domainname file(s) to specify which networks can have access to what records.

    For more information on split-dns configuration and DNS configuration in general, see Appendix B, Further readings.

9.2. NTP

Accurate timekeeping is very important for firewalls. Without reliable time data, security log analysis is very difficult and can lead to false results. Besides, if the firewall provides any auxiliary services that use timestamping, for example, a mail service, false time values can be disturbing, too. One of the few things that are still operating in the original, free and liberal spirit of the Internet is the Network Time Protocol service. There are a number of timeservers on the Internet that allow free connections from anywhere. For a complete list, see Appendix B, Further readings.

Companies typically connect to a Stratum 2-level timeserver on the Internet and then distribute time within their organization from this single time server source. Using the native NTP service, PNS can function as a central timeserver for the entire organization, if needed. This service generally does not put a heavy load on the machine, nor does it pose a significant security risk, so it is generally acceptable to use PNS as a timeserver for the internal machines.

Unlike application proxy plugins, native services operate as feature-complete software components, so the NTP component in PNS is a real NTP server. NTP is generally not suitable for proxying, since the latency of the proxy component would not be constant but rather load-dependent. Packet filtering could work for NTP, but application-level handling of traffic generally offers a higher level of security. For these reasons, NTP is handled by a native service.

9.2.1. Procedure – Configuring NTP with MC

NTP has a separate configuration component in MC.

  1. Add the Time component to the PNS host in MC. It is recommended to select the Date and time template.

    Adding the Time component

    Figure 9.4. Adding the Time component


  2. Name the component.

  3. Specify a time server with which PNS synchronizes its system time.

    Selecting a time server to synchronize with

    Figure 9.5. Selecting a time server to synchronize with


  4. Configure the Time subcomponents. Date and time can be updated from the specified time server by clicking Run now, or set manually.

    The Time component

    Figure 9.6. The Time component


    1. Set date and time information manually, using the three-dotted (...) button.

      Editing time and date values manually

      Figure 9.7. Editing time and date values manually


      If the date and time values are set manually, a dialog box warns that instead of setting these values manually, it is recommended to force an update from the configured timeserver with the ntpdate command. To synchronize the time to the time server, click the Run now button at the bottom of the window.

      It is recommended to use ntpdate instead of manually tuning date and time values because the Internet time is probably more accurate than other, local time sources.

    2. If you want to permit your clients to synchronize their clocks to the PNS host, enable the ntp local service. See Section 9.4, Local services on PNS for details.

    3. List the NTP servers PNS can communicate with.

      For time synchronization fault tolerance, it is recommended to add at least two servers to the list. Click New and enter the address of the NTP server, as well as the interval at which the clock of the PNS host should be synchronized to the NTP server.

      Adding a new NTP server

      Figure 9.8. Adding a new NTP server


9.2.2. Status and statistics

The status of the configured time servers is indicated by LEDs of different colors before the name of the server. Hovering the mouse over a server displays the following statistics about the connection in a tooltip (for details on these values, see Appendix B, Further readings):

  • Tally code:

  • Ref. ID: It is the reference ID (0.0.0.0 if unknown).

  • Stratum: It is the place of the server in the 'clock strata' hierarchy. Stratum 1 systems are synchronised to an accurate external clock; stratum 2 systems derive their time from one or more stratum 1 systems, and so on.

  • Type: It is the type of the peer (local, unicast, multicast or broadcast).

  • Last arrived: It is the time value when the last packet was received.

  • Polling interval: It defines the period between two queries in seconds.

  • Reachability: It is a register indicating the reachability of the peer. A peer is considered reachable if at least one bit in this register is set to one.

  • Delay: It is the current estimated delay in milliseconds.

  • Offset: It is the current estimated offset in milliseconds.

  • Jitter: It is the estimated time error of the peer clock (measured as an exponential average of RMS time differences).

  • Status: It displays the synchronisation status of the NTP server.

9.3. Postfix

SMTP mail handling in PNS is very flexible and is designed to allow for as many different e-mail “needs” as possible. Based on the size, profile and security requirements of a company, there are a number of configurations possible for handling email traffic.

Very small companies trust their ISP to host the SMTP service for them, only connect with a mail retrieval (post office) protocol (POP3 or IMAP) to download the mail, and use the ISP's mail server as their outgoing SMTP server. Larger companies may have their own SMTP server but still use the ISP's mail server as their official mail exchanger and only relay mail between the two. Companies that need maximal protection have a fully functional, DNS-registered mailserver. The next level of security can be achieved by a sophisticated mail routing architecture, with multiple domains and complex email traffic rules.

PNS aims to provide protection support for all types of SMTP requirements. It has a proxy class for SMTP that is the primary tool for handling SMTP traffic. It is not a fully functional mail server, but rather a fully transparent filter module. It does not send and receive SMTP mail messages, and it does not have a local mail store either. This proxy can interoperate with antivirus software for filtering viruses in SMTP traffic. With the SmtpProxy, or a customized, derived version of it, most SMTP firewalling needs can be fulfilled.

There are, however, cases when simply proxying SMTP traffic is not enough and some more intelligent mail handling procedure is required due to the organization's special needs.

Example 9.3. Special requirements on mail handling
  1. If a company maintains multiple mail domains and/or needs complex mail routing rules using transport tables.

  2. If a company aims to avoid time-outs when antivirus filtering is enabled and large attachments need to be scanned. Unlike most MTAs, SmtpProxy only accepts (acknowledges) a mail message after it has fully arrived and has been scanned for viruses, which may lead to timeout situations when communicating with other, real MTAs on the Internet.

For such cases, PNS installs a fully functional Postfix service besides the SmtpProxy. Virtually any setup and configuration possible with a Postfix mail server is also possible here. This does not mean that PNS should be operated as a generic mail server for users; however, sophisticated SMTP configurations are possible with it.

Note

By default, PNS does not install a mailbox protocol server program, because a firewall should not run a POP3 or IMAP server.

The Postfix component can also provide SMTP delivery for local services: syslog-ng and other services have to be able to send e-mails. The local delivery of e-mails, however, should not be allowed, if possible.

Note

The Postfix native service is not intended to replace the SmtpProxy application proxy in SMTP–handling configurations.

Even if the configuration options of SmtpProxy are not adequate, it is still recommended to keep the SMTP proxy as the 'front end' on the firewall, which, after proxy-level filtering, passes the SMTP traffic on to the Postfix service.

As the possible uses of the Postfix component are so versatile, it is not possible to cover even the most typical ones in this chapter. Nor is it a firewall administrator's task to set up a complex mail routing architecture. Therefore only a brief introduction of the configuration interface is presented. For more information and details on Postfix, see Appendix B, Further readings.

9.3.1. Configuring Postfix with MC

You can accomplish Postfix configuration through a set of configuration files represented by the Mail transport component. This plugin has five tabs, corresponding to configuration files in /etc/postfix on the firewall.

9.3.1.1. Procedure – Configuring Postfix with MC

  1. Add the Mail transport component to the PNS host in MC. Select a template suitable for your needs, for example, the Mail transport default template.

    Adding the Mail transport component

    Figure 9.9. Adding the Mail transport component


  2. Open the configuration tabs.

    Configuration tabs in the Mail transport plugin

    Figure 9.10. Configuration tabs in the Mail transport plugin


  3. Specify parameters in the General tab.

    1. Provide My domain.

      It specifies the DNS domain of PNS which, in turn, defines what domain it receives mail for. Receiving mail for other domains is also possible. For details, see Appendix B, Further readings for a reference on mail administration.

    2. Enter My Hostname.

      It is the name of PNS, exactly as it is registered in DNS. The MX record in DNS must point to this name, so it is important to specify it correctly.

    3. Provide My networks.

      It specifies what IP networks Postfix accepts outgoing mail from, in other words, for which networks it acts as a mail relay.

      Note

      Unless explicitly required by your networking requirements, do not list all your internal networks. Doing so would allow all your hosts to send mail individually and directly, which might not be optimal from a security point of view. For example, viruses usually contain an SMTP component for sending mail that should not be let through the firewall.

      If you only have a single mail server for handling external SMTP messages, list the mail server's single IP address. Correspondingly, list only those network interfaces of PNS as Listen interfaces, on which you want to handle incoming mail traffic.

    The rest of the parameters on the General tab are more special settings and their use depends on the configuration needs.

    Essential components of Postfix configuration

    Figure 9.11. Essential components of Postfix configuration


  4. Configure settings on the Master tab.

    The Master tab

    Figure 9.12. The Master tab


    Configure the settings if you have a Mail Scanner or Amavisd-new–based antivirus solution.

    The Master tab of the Mail transport component corresponds to the /etc/postfix/master.cf file.

  5. Configure settings on the Maps tab to add transport and virtual maps to Postfix.

    The Maps tab

    Figure 9.13. The Maps tab


    In order to route incoming mail from PNS to different, internal mail domains, an SMTP transport map can be provided, with the IP address of the real, internal mail servers serving the given mail domains.

  6. Configure the Checks tab.

    The Checks tab

    Figure 9.14. The Checks tab


    This tab covers two Postfix configuration files, /etc/postfix/header_checks and /etc/postfix/body_checks. The lookup table type used for the checks can be either hash or regular expression (regexp); it can be selected from the Lookup table type combobox. An example entry is shown after this procedure.

  7. Configure the Access tab.

    The Access tab

    Figure 9.15. The Access tab


    Similarly to the Checks tab, this tab covers two files: /etc/postfix/recipient_access and /etc/postfix/sender_access (see the example entries after this procedure).

  8. To permit access to the Postfix service, enable the smtp local service. See Section 9.4, Local services on PNS for details.

    Note

    Choose the zones that are allowed to access the Postfix service carefully.
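
The following entries illustrate the kind of content the map and check files behind the Maps, Checks, and Access tabs can hold. They are hedged, hypothetical examples only: the domains, addresses, and patterns are illustrative and must be adapted to your environment.

# Hypothetical transport map entry (Maps tab): deliver mail for an
# internal domain to the internal mail server 192.168.1.100
example.com             smtp:[192.168.1.100]

# Hypothetical header check (Checks tab, regexp lookup table): reject
# messages with a suspicious Subject header
/^Subject:.*make money fast/    REJECT

# Hypothetical sender access entries (Access tab)
spammer@example.net     REJECT
partner.example.com     OK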

Together, these tabs (more precisely, the files they directly correspond to) make up the majority of a typical Postfix configuration.

As you can see, it is technically possible to create a full-featured SMTP server on the PNS machine, but it is definitely not recommended.

Note

It is recommended to use the SMTP proxy to perform mail proxying on your PNS firewall. Use the Postfix component only when necessary, for example, in case of complex mail routing requirements.

9.4. Local services on PNS

Local services run on the elements of the PNS Gateway System: on PNS, MS, and CF hosts. PNS hosts can provide the following services locally:

Warning

Local services can be accessed only by using IPv4. IPv6 access for local services is currently not supported.

  • ssh: It enables remote SSH access to the PNS host. It opens port TCP/22.

  • smtp: It enables the transport of SMTP (e-mail) traffic. This local service must be enabled if you want to use the native Postfix service of PNS to handle e-mail transfer (see Section 9.3, Postfix). It opens port TCP/25.

  • nagios-nrpe-server: It enables nagios-nrpe-server to query the PNS. This local service must be enabled if you want to monitor your PNS with Nagios (see Procedure 17.3, Monitoring PNS with Nagios). It opens port TCP/5666.

  • munin-node: It enables Munin to query the PNS. This local service must be enabled if you want to monitor your PNS with Munin (see Procedure 17.1, Monitoring PNS with Munin). It opens port TCP/4949.

  • ntp: It enables clients to synchronize their system clocks to the clock of the PNS host using NTP. This local service must be enabled if you want to use the native NTP service of PNS (see Section 9.2, NTP). It opens port UDP/123.

  • identreject: If it is enabled, PNS rejects all traffic arriving at port TCP/113.

  • dns: It enables clients to use the PNS host as a DNS server. This local service must be enabled if you want to use the native BIND9 service of PNS (see Section 9.1, BIND). It opens port UDP/53.

  • dns-zonetrans: It enables clients to use the PNS host as a DNS server. This local service must be enabled if you want to use the native BIND9 service of PNS and enable zone transfer (see Section 9.1, BIND). It opens port TCP/53.

  • MSgui: It enables administrators to connect to MS with MC, and manage the PNS Gateway System. It opens port TCP/1314.

  • MSengine: It enables communication between MS and the PNS hosts. This local service must be enabled if a host is managed from MS. It opens ports TCP/1311 and TCP/1313.

  • MSagent: It enables communication between the PNS hosts and MS. This local service must be enabled on the MS host. It opens ports TCP/1310 and TCP/1312.

Note

PNS automatically enables the services required for the management of the host: MSagent for PNS hosts; MSgui and MSagent for MS hosts. It is recommended to allow SSH as well.

Local services can be managed on the Services tab of the Management Access MC component. For every local service, the Name, the Port used (or ICMP type), the Protocol (TCP, UDP, or ICMP), and the Target parameters are displayed. If the value of the Target parameter is ACCEPT, the local service is permitted; if the value is REJECT, it is denied. To enable access to a local service on a host, complete the following steps.

9.4.1. Procedure – Enabling access to local services

  1. Navigate to the Services tab of the Management Access MC component of the host and click New.

  2. Select the service from the Local service combobox. The port number and the type of the protocol are set automatically. Modify them only if you have a special configuration.

  3. Select the zones permitted to access the service, then click Ok. The packet filter rule corresponding to the new service is automatically added to the ruleset.

  4. To activate the new service, do not forget to Commit and Upload the changes to the host, and to Reload the component using the Control service icon.

Chapter 10. Local firewall administration

PNS, in cooperation with the MS and MC software components, is designed to be fully configurable from the graphical user interface of MC. Though this graphical administration is definitely the preferred method of management, it is possible to manually accomplish all the management and configuration tasks using a simple, character–based terminal console connection. In addition, the console–based administration provides some useful tools for troubleshooting scenarios that are not available through MC.

Local firewall administration, in this sense, does not necessarily refer to administration that takes place physically at the firewall machine using its local console and keyboard, but it also refers to setups where the character terminal of the firewall is reached through a secure network connection using SSH. The described administration is local in the sense that the configuration files are directly manipulated on the firewall machine, and not through the MS database.

Note

MS reads the configuration files of the firewall host only once, when it is bootstrapped. For details, see Chapter 4, Registering new hosts. After that, configuration changes are only downloaded to the host with the help of the transfer agent and are not parsed again by MS. Therefore, if you make local changes to a configuration file which is otherwise managed by MS, your configuration changes are overwritten when you next issue an Upload command from MS.

Configuration files that are not managed by MS, for example custom installed services on the firewall for which you do not define a Text Editor plugin, are not affected by this rule.

10.1. Linux

The components of PNS Gateway run on Ubuntu-based operating systems, currently on Ubuntu 22.04 LTS. Therefore, for local management, the tools and procedures available are more or less the same as for any other Linux installation. If you install the PNS host using the installation media of BalaSys, only the most essential tools and services are installed by default. Although it is technically possible to install almost any kind of Linux software on a PNS host, it is not recommended because custom installed tools may have a negative effect on the system in terms of security and stability. In fact, all local administration tasks can be accomplished by using the software tools that are available by default.

Linux is a technically sophisticated operating system that allows full access to and total control over itself for system administrators. Therefore, it is vital to have sufficient skills and experience before administering it locally in a production environment.

Note

Untrained or inexperienced use can render any Linux system inoperable quickly, therefore extreme care is required when performing local administration.

This chapter is by no means intended to be a Linux tutorial. If you are unfamiliar with the general administration of Linux, consult some form of documentation before proceeding. There is excellent documentation available for Linux, both in printed and online forms. To avoid scattered resources and material, the Linux Documentation Project (LDP), which manages the documentation tasks of Linux, provides a central access point to a thematically organized set of Linux documentation. Visit the LDP site (http://www.ldp.org) for more information. Although the primary language of the documentation is English, a substantial part of the material is either translated or in the process of being translated to other languages. If you prefer traditional, paper–based documentation, books on Linux administration are also available from major publishers worldwide. For more information, see Appendix B, Further readings.

10.2. Login to the firewall

A basic rule of working with any operating system is that only tasks that require system administrator (root) rights must be performed using a system administrator's account. All other tasks must be performed as a normal, non-privileged user. This is especially true for security-sensitive environments, such as a firewall.

Therefore, for every administrator of the firewall who needs local access, a normal user account must be created. Even if a firewall is managed by a team of administrators, typically only senior–level staff must be provided local logon rights. To preserve accountability and to maintain an adequate level of security, a separate account must be created for each administrator. These accounts are normal user accounts with no special privileges. However, to perform administrative tasks, special, root level access is needed to the services involved in the administrative action.

Linux provides the sudo tool to grant root level access to dedicated normal users. sudo allows the users to run only certain commands (specified by the root user) as root.

The sudo utility provides a secure method of having root level access to parts of the system. It is especially useful if there is more than one user who is potentially a local administrator. To use sudo, all users who need root access must be listed in the file /etc/sudoers. The sudo command then allows these users to run only certain commands (specified by the root user) as root. This selective method is more secure than providing a user full root level access to all parts of the system. It also allows for a more granular control over user activities on the system. sudo has configurable options as well as default settings; more information on these can be found in its manual pages (man sudo).
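
For example, a minimal /etc/sudoers entry might look like the following. This is a hedged sketch only: the username and the command path are illustrative, and the file should always be edited with the visudo command.

# Allow the user 'admin1' to run the PNSctl utility as root
admin1  ALL=(root) /usr/sbin/PNSctl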

Tip

You can easily access Linux/Unix manual pages from a Windows environment as well. If you have a Mozilla or Firefox browser, type man sudo in the address line and the browser immediately opens the sudo project's homepage.

Local administration of the firewall can be accomplished through either the local console itself, that is, physically sitting in front of the machine, or through a terminal session over the network. This network session obviously needs to be encrypted. PNS uses the industry–standard SSH protocol to accomplish this. SSH is a client–server protocol that provides an encrypted communication channel between the parties. PNS contains a native SSH server implementation, and SSH clients are freely available for most major operating systems. If you do not have one already installed on your system, visit http://www.freessh.org for a list of both free and payware SSH clients.

To establish an SSH session, the client must first authenticate itself to the server. A number of different authentication methods are defined in the SSH protocol standard. Currently, SSH version 2 is the latest standard version.

The simplest authentication method is password authentication: the user logging in to the SSH server provides a username/password pair that identifies a valid user on the server machine. The advantage of this method is its simplicity: no special configuration is needed, and the user must know the username and password on the server anyway.

A more secure method of SSH login is the public key-based authentication: in this case the user possesses a public/private key pair and the server has a copy of the user's public key. The logon procedure using public keys is the following:

  1. Using the private key, the user initiates a logon to the server with the help of a session identifier.

  2. The server checks whether it has the matching public key for the user and grants access if both the key is found and the signature is correct.

The advantage of this method is enhanced security. No username and password have to be provided, and the entire authentication session is secured with public key cryptographic procedures. The disadvantage is the extra setup needed: key pairs have to be generated for the users, the public keys have to be placed on the server, and the users have to carry their private keys to every client computer they wish to log on from and make sure to remove them when leaving the client. Although this method is more complicated to set up, it is the preferred one due to the enhanced security it provides.

By default, PNS allows password-based SSH logins, too. During the setup it has to be decided whether the root user can log in to the system through an SSH connection. For security reasons, it is recommended not to allow root to log in through SSH. Instead, establish an SSH session as a normal user first, and then use sudo to gain the special access permissions needed for the administrative tasks.

For more information on SSH and its configuration, see Appendix B, Further readings.

10.3. Editing configuration files

Local system configuration is performed by editing the appropriate configuration files and then reloading or restarting the corresponding service(s).

All important system components, such as daemons and services, have their own configuration files; some have more than one. These files are generally stored under the /etc directory. There are exceptions to this rule, of course, but the majority of configuration files are in that directory. Some files are stored directly under /etc, but most services store their configuration files in a subdirectory. For example, PNS stores a number of configuration files under /etc/PNS.

The configuration files are usually plain-text ASCII or XML files, and can be edited with any text editor. By default, the following text editors are installed: joe, nano, and vi.

Tip

Before editing configuration files, make backup copies, for example, using the following command: cp filename filename.bak

Warning

PNS replaces the configuration file of several services with a symbolic link that points to a configuration file that is maintained by PNS. Do not edit such files directly, because the changes will be automatically removed at the next upgrade to a new version of PNS.

To edit such files properly, first remove the symbolic link and replace it with a regular copy of the file it points to. Following that, the file can be edited. The list of files replaced with symbolic links is the following: /etc/apparmor.d/abstractions/base_reduced, /etc/apparmor.d/abstractions/nameservice, /etc/default/spamassassin, /etc/default/snmpd, /etc/dhcp3/dhclient.conf, /etc/grub.d/10_linux, /etc/openvpn/up.py, /etc/init/procps-late.conf, /etc/init.d/kdump, /etc/ip6tables.conf.in, /etc/ip6tables.conf.var, /etc/logrotate.d, /etc/network/if-up.d/group, /etc/network/if-up.d/dhcp3-relay, /etc/openswan/ipsec.conf, /etc/rc.local, /etc/syslog-ng/syslog-ng.conf
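
For example, to make the /etc/rc.local file locally editable, the symbolic link can be replaced with a copy of the file it points to. This is only a sketch; adapt the file name to the file you actually need to edit.

# Replace the managed symlink with an editable copy of the target file
cp --remove-destination "$(readlink -f /etc/rc.local)" /etc/rc.local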

10.4. Network configuration

Apart from special setups where you need to fine-tune various performance parameters, the network configuration is a relatively simple task under PNS. You have to provide basic IP parameters, such as IP addresses, subnet masks, default gateway. The most important configuration file for networking is /etc/network/interfaces. This file contains separate sections for all network interfaces available in the given system. The official installation procedure of PNS involves steps to configure these basic IP parameters, so in a properly installed system this file is not empty.

Note

After the first upload of a configuration file edited in MC, the following three-line comment is displayed at the beginning of all editable files under MC:

#
# This file is generated by Management Server. Do not edit!
#

This warning is a reminder that even if a file is edited manually, it is overwritten at the next upload of any change from MC. The warning can be safely ignored if the edited file is not planned to be managed from MC in the future.

A typical interface configuration section in /etc/network/interfaces is the following.

auto eth0
iface eth0 inet static
	address 192.168.1.253
	netmask 255.255.255.0
	broadcast 192.168.1.255
	network 192.168.1.0
	gateway 192.168.1.254

After editing and saving this file, activate the changes by running the /etc/init.d/networking script with the restart argument. This applies to all other network configuration files, too.
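
For example, after modifying /etc/network/interfaces, run the following command as root:

/etc/init.d/networking restart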

Another, less frequently modified network configuration file is /etc/hostname. It contains the hostname parameter of the system, which is important for the name resolution processes initiated by various system components. Whenever a process needs name resolution, that is, to map a name to an IP address, the first thing the system does is check this file to see whether its own hostname has been queried. The Linux command hostname queries this file as well.

The file /etc/mailname is important for the proper operation of the Postfix native service. It must not be empty; it is filled in automatically. You can alter the value stored here, if needed.

Another network configuration file, /etc/hosts may be used for static name resolution: it stores name to IP address mappings for network hosts. Before the DNS solution, this file was the only means to map hostnames to IP addresses. Today, most of its functionality has been taken over by DNS, but it is still useful in some scenarios. When a hostname needs to be looked up, /etc/hosts is the third place the system looks for a match – the first is /etc/hostname while the second is the in-memory DNS cache. Therefore, if there is a limited number of hosts the firewall often visits, among which there is, for example, a proxy server, it is recommended to list these hosts in /etc/hosts:

#
# This file is generated by Management Server. Do not edit!
#
127.0.0.1 localhost
192.168.1.253 proxy
192.168.1.100 mail

By default, there is only one entry in this file for the hostname localhost with the IP address 127.0.0.1. This entry is needed for system boot processes, therefore it shall not be deleted.

Similarly to hostnames, networks can be named with symbolic names. The file /etc/networks stores these mappings. By default, this file is empty on the firewall and PNS generally does not use it.

The /etc/resolv.conf file is used by the resolver library to find what DNS servers to query when a process needs to look up an IP address for a given hostname, or vice versa. In other words, this file lists the known nameservers for the firewall. Additionally, it contains an entry for the domain name of the firewall. This entry is also important for name resolution purposes: if, instead of a fully qualified domain name (FQDN) only a hostname is queried, the resolver automatically appends this domain name to the hostname and tries to look up the FQDN created this way.

#
# This file is generated by Management Server. Do not edit!
#
domain example.org
nameserver 192.168.1.200

This section introduces only briefly the network configuration files. For more detailed information and instructions on network configuration, see Chapter 5, Networking, routing, and name resolution, the references listed on networking in Appendix B, Further readings, and the manual pages for the mentioned files (man filename – without full path, for example: man interfaces).

10.5. System logging

Syslog-ng is the native and recommended logging service for PNS. Its configuration is stored in the /etc/syslog-ng/syslog-ng.conf file.

For more detailed information and instructions on system logging, see chapter Syslog-ng, the Syslog reference manual accessible from Appendix A, and the installed manual pages for both syslog-ng (the utility) and syslog-ng.conf (the configuration file).

After editing and saving the syslog-ng.conf file manually, restart the service by running the /etc/init.d/syslog-ng script with the restart/reload arguments. Its default configuration under PNS routes all relevant system, ISC BIND 9 and NTP messages to the /var/log/messages file and also to the console, /dev/tty8.

If you have a separate, central syslog-ng server for collecting messages from critical network hosts, such as the firewall(s), you can route (log) messages using the following steps.

  1. Set up a new destination of TCP or UDP type, with the IP address of your syslog-ng server, in syslog-ng.conf on the firewall.

    Example 10.1. Specifying the target IP address of a TCP destination

    The IP address of the syslog-ng server is 10.20.30.40 in this example.

    destination d_tcp { tcp("10.20.30.40" port(1999) localport(999)); };

    Supplying port information is optional; if port number is not set, the default ports are used.

  2. Decide what sources (s1, s2 here) shall be logged to the syslog-ng server and set up a log path accordingly. Note that filters are optional.

    log { source(s1); source(s2); filter(f1); destination(d_tcp); };
  3. Define a source on the syslog-ng server with the IP address of the firewall sending log messages (see the sketch after this list).

    Note

    Specify the port numbers carefully. The corresponding ports must match on both sides.
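
The following is a minimal sketch of the server-side configuration matching the TCP destination of Example 10.1; the source and destination names, the listening address, and the log file are illustrative only:

# Accept log messages from the firewall on TCP port 1999
source s_firewall { tcp(ip("0.0.0.0") port(1999)); };
destination d_fwlogs { file("/var/log/firewall.log"); };
log { source(s_firewall); destination(d_fwlogs); };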

10.6. NTP

The Network Time Protocol (NTP) is used for synchronizing system time with reliable time servers over the Internet. The synchronization is performed by a dedicated service, the NTP daemon (ntpd). The configuration file of ntpd is the /etc/ntp.conf file. The ntp.conf configuration file is read at initial startup by the NTP daemon in order to specify the synchronization sources, modes and other related information.

Unlike the system logging and network configuration files, ntp.conf does not have a manual page in the default installation of PNS. However, there are many useful sources available on NTP; for details, see Chapter 9, Native services, the NTP protocol/service website linked in Appendix B, Further readings, RFC 1305 on NTP version 3, and the manual page of ntp.conf, which is accessible on other Unix/Linux installations or on the Internet.

NTP itself can have a very sophisticated configuration with, for example, public key authentication, access control, or extensive monitoring options. At the very minimum, define a time server with which the firewall can synchronize time (the server key).

Add the following line in the configuration file.

server 10.20.30.40

Note

If more than one timeserver is supplied, the system time is more accurate, because during a time update all the listed servers are queried and a special algorithm selects the best (most accurate) of them.

Additionally, since PNS can be used as an authentic time source for the network, you can limit the number of concurrent client connections using the clientlimit key, and you can set a minimum time interval a client can synchronize time with the firewall using the clientperiod key.
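
The following sketch illustrates such a configuration; the server addresses and the limit values are illustrative only, and the accepted keywords may depend on the installed ntpd version:

# multiple time sources improve accuracy
server 10.20.30.40
server 10.20.30.41
# limit concurrent clients and how often a client may synchronize
clientlimit 5
clientperiod 3600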

After editing and saving the ntp.conf file manually, restart the service by running the /etc/init.d/ntp script with the restart argument. NTP can be chrooted as well, in which case the place of the configuration file is /var/chroot/ntp/etc/. The configuration can be edited here directly or else the original configuration file can also be used. In the latter case the jailer script updates the configuration inside the chrooted (jailed) environment. The jailer update process involves the following three steps:

  1. The original configuration file is modified.

  2. Jailer is run.

  3. The process (daemon) is restarted.

If MC is used for system configuration, the configuration files are automatically created inside the chrooted environment, so no special intervention is needed.

This method for updating jailed environments is the same for all other daemons that are to be jailed under PNS, such as ISC BIND 9.

System time is updated with the ntpdate command. Run the command as root, usually from a system startup script, so that the system time gets adjusted during bootup. You can also run the command manually, if needed.

10.7. BIND

BIND 9 is the official DNS server solution in PNS. BIND under PNS always runs in a chrooted environment, so its configuration file(s) are stored under the /var/chroot/bind9/etc/bind/ directory.

BIND 9 introduced the notion of split-DNS installations where basic access control can be applied to DNS Zone records. That is, for each record in the DNS database file you can specify whether outside resolvers can query those records ('public' records in DNS terminology) or they are only available to internal resolver clients ('private' records).

Choosing split-DNS setup is optional. In this case there are two configuration files:

  • named.conf, and

  • named.conf.shared.

The named.conf.shared file hosts information that is intended to be public, that is, accessible to outside resolvers.

Tip

Setting up a split-DNS configuration is reasonable if the firewall is going to be an authoritative nameserver for one or more domains. If it is only used as a forward-only server, split-DNS is not necessary.

In forward-only configurations, only the named.conf file is used. Being a forward-only server, the nameserver under PNS does not perform recursive name resolution on the Internet for the internal clients, but instead, it forwards queries as is to the nameserver(s) configured as its forwarder(s).

This is probably the simplest functional named configuration possible, as only a single entry has to be edited in the configuration file.

forwarders {
  10.20.30.40; //IP address of the forwarder nameserver
};

You can use BIND 9 as a slave nameserver. In this setup, you do not maintain zone information on the firewall; instead, the firewall pulls the zone database records from an authoritative master nameserver through the zone transfer process. This setup provides fault tolerance, since if the master nameserver fails, the slave still contains a more or less up-to-date copy of the zone database.
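
A minimal sketch of a slave zone definition in named.conf follows; the zone name and the IP address of the master nameserver are illustrative only:

zone "example.org" {
  type slave;
  masters { 10.20.30.40; };
  file "db.example.org";
};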

The daemon running the BIND service is called named (hence the name of the configuration file), but the directory of the configuration files is still called bind, after the name of the original Berkeley Unix implementation of the service. The startup script for the service is also called /etc/init.d/bind9.

For further information on BIND, see Chapter 9, Native services, and the references listed in Appendix B, Further readings.

10.8. Procedure – Updating and upgrading your PNS hosts

PNS uses the apt package manager application to keep the system up-to-date. Security and maintenance updates as well as product upgrades are all performed with apt. To update any host of your PNS Firewall solution (including PNS firewall hosts, AS and CF hosts, as well as your MS server), complete the following steps.

For more information on apt, see the apt-get manual page.

  1. Update the apt sources of the host. Use one of the following methods:

    • To upgrade from a DVD-ROM:

      1. Open the BalaSys website. To open it, it is necessary to authenticate with your support user credentials.

      2. Choose the necessary, preferably the latest version of the ISO file, and download it from the relevant cd directory.

      3. Burn the DVD-ROM to physical media.

      4. Mount the DVD-ROM on the host, and execute the following command as root: >:~#apt-cdrom add

    • To upgrade from the official BalaSys apt repositories from the Internet:

      1. Edit the /etc/apt/sources.list.d/PNS.list file and add the URLs of the package sources to download.

    For details on the required sources, see Procedure 4.2, Upgrading PNS hosts using apt in Proxedo Network Security Suite 2 Installation Guide.

    Warning

    Do not remove Ubuntu sources from the /etc/apt/sources.list file. These are necessary for upgrading the base operating system.

  2. Enter the following two commands.

    >:~#apt update

    >:~#apt dist-upgrade

    The first command updates the package list with the latest available versions. The second command performs the upgrade itself.

  3. The rest of the process is done automatically.

10.9. Packet filter

The vela packet filter configuration is stored in the /var/lib/vela/config/nftables.d directory. The configuration can span multiple files, generated by different vela components. Furthermore, besides these static configuration files, the vela-nfqueue-helper and vela-zone-helper daemons can upload additional packet filter rules dynamically, to support routing by vela zones and services.

The nftables default directory, /etc/nftables.d, is symlinked to the directory containing the current vela packet filter configuration. To make the packet filter configuration more resistant to errors, the uploaded configuration is first tested for syntax errors; if it is valid, it is copied to a temporary directory and the symlink is updated to point there. This guarantees that if an invalid configuration is accidentally uploaded, the packet filter keeps using the last valid configuration, preserving the functionality of the firewall and its accessibility from the network.
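
For example, a ruleset file can be checked manually for syntax errors with the nft userland utility without loading it; the file name below is illustrative only:

nft --check --file /etc/nftables.d/10-vela.nft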

After installing the firewall a default ruleset is active. Since PNS acts as a default-deny firewall, the ruleset allows only connections from the MS host machine specified during installation to the firewall and the outgoing connections originating from the firewall itself.

For more information, see the installed manual pages of nft (userland utility), and the documentation of Netfilter/nftables project including a detailed tutorial and HOWTO documents accessible from Appendix B, Further readings.

10.10. PNS configuration

The networking configuration of the firewall which involves IP addresses, hostnames, and resolver configuration, rarely changes. However, the daily administration of the firewall often requires the changing of the actual ruleset. For more information on this process, see section Creating PNS Policies.

Basically, the process can be divided into the following two main parts.

  1. Configuring the necessary service definition(s).

  2. Creating the matching packet filter ruleset, that is generating a skeleton.

The latter packet filter manipulation procedure is detailed in Section 10.9, Packet filter. This section shows how to edit a service definition locally.

The key configuration files needed are stored in the /etc/PNS directory. The following files play the most important roles in the configuration.

  • policy.py

    containing complete service definitions

  • instances.conf

    listing the instances used in the firewall together with their parameters

Tip

In the default installation of PNS there are two commented sample files, policy.py.sample and instances.conf.sample that are helpful in getting started with configuration.

To learn command-line policy management it is advised to first use MC to graphically generate test-policies and then to check the generated policy files through a terminal connection.

For background information on the possible contents of these files, see Chapter 6, Managing network traffic with PNS.

The configuration of PNS is based on the Python programming language. The configuration file (policy.py) is a Python module in itself. This does not mean, however, that proficiency in Python is required: knowing the syntax of the language and a few semantic elements is sufficient. Though the configuration file may not seem like a complete Python module, it is important to know that it is parsed as one. The following syntactical requirements of Python apply:

Indentation is important as it marks the beginning of a block, similar to what curly braces ('{}') do in C/C++/C#/Java. This means that the indentation of a given block must be consistent. The example below shows a correct syntax first, followed by an incorrect one.

Correct:

if self.request_url == 'http://www.balasys.hu/':
  print ('debug message')
  return HTTP_REQ_ACCEPT
return HTTP_REQ_REJECT

Incorrect:

if self.request_url == 'http://www.balasys.hu/':
   print ('debug message')
  return HTTP_REQ_ACCEPT
return HTTP_REQ_REJECT

Getting used to correct indentation is probably the most important Python task for a beginner, especially without any C or C-like programming experience. Indentation in Python is the only way to separate blocks of code, since there are no Begin and End statements or curly braces. Otherwise, the language itself is quite simple and easy to learn. Note that Python is case-sensitive.

For more information on Python, see Appendix B, Further readings.

10.10.1. Policy.py and instances.conf

The policy.py file has a strict structure that must be obeyed when modifying the configuration manually. It consists of the following code modules:

  • Import statements

  • Zone definitions

  • Class configurations

  • NAT policy settings

  • Authentication policy settings

  • Instance definitions

These modules are of varying length, depending on the complexity of the policy configuration.

10.10.1.1. Procedure – Edit the Policy.py file

  1. Set the import statements.

    The default-installed policy.py.sample file starts with the import statements:

    from PNS.Core import *
    from PNS.Plug import *
    from PNS.Http import *
    from PNS.Ftp import *

    These statements mean that one or more required (Python) front-end modules are imported to the configuration. PNS.Core is essential, however, the other three imports are included because the sample file contains references to these three proxy classes.

    Tip

    A good way of learning policy.py is to create firewall policies in MC and then look at the resulting configuration files.

  2. Provide the name of the firewall, and the zone definitions along with the access control defined for them, that is, the allowed outbound and inbound services.

    Zone("site-net", ["192.168.1.0/24"])

  3. Configure the classes used in service definitions.

    These class definitions can be simple, in essence only naming the proxy class to be used, that is, to be derived from, like the IntraFtp class in the sample file:

    class IntraFtp(FtpProxy):
      def config(self):
        FtpProxy.config(self)

    Or, they can be rather complex, customizing the derived proxy class with attributes, as in the case of the IntraHttp class in the sample file:

    # Let's define a transparent http proxy, which rewrites the
    # user_agent header to something different.
    #
    class IntraHttp(HttpProxy):
      def config(self):
        HttpProxy.config(self)
        self.transparent_mode = TRUE
        self.request_headers["User-Agent"] = (HTTP_HDR_CHANGE_VALUE, "Lynx/2.8.3rel.1")
        self.request["GET"] = (HTTP_REQ_POLICY, self.filterURL)
        # self.parent_proxy = "proxy.site.net"
        # self.parent_proxy_port = 3128
        # self.timeout = 60000
        # self.max_keepalive_requests = 10
    
      def filterURL (self, method, url, version):
        # return HTTP_REQ_REJECT here to reject this request
        # change self.request_url to redirect to another url
        # change connection_mode to HTTP_CONNECTION_CLOSE to
        # force kept-alive connections to close
        log("http.info", 3, "%s: GET: %s" % (self.session.session_id, url))
  4. Define the instances to be used.

    Besides its name, the most important characteristic of an instance is the list of services it provides. Therefore, define services within the instances:

    # PNS_http instance
    def PNS_http () :
      # create services
      Service(name='intra_http', router=TransparentRouter(), chainer=ConnectChainer(), proxy_class=IntraHttp, max_instances=0, max_sessions=0, keepalive=V_KEEPALIVE_NONE)
      Service(name='intra_ftp', router=TransparentRouter(), chainer=ConnectChainer(), proxy_class=IntraFtp, max_instances=0, max_sessions=0, keepalive=V_KEEPALIVE_NONE)
      Rule(proto=6,
        dst_port=80,
        service='intra_http'
        )
      Rule(proto=6,
        dst_port=21,
        service='intra_ftp'
        )

    Still within the instance definition code block, with correct indentation, specify the firewall rules that will start these services.

These blocks, the zone definition, proxy class definition, instance definition, service definitions, and rule definitions make up the policy.py file. The provided example is simple, yet it provides a lot of information on the correct syntax and on the possible contents of the policy.py file.

The other configuration file, instances.conf, is much simpler: it lists the instances to be run and supplies some runtime arguments for them, such as the log level. The only compulsory argument for running an instance is the name of the Python file containing the corresponding instance definition. Although the example uses a single policy file (policy.py) to store all definitions, it is possible to separate the policy into different .py files if it makes maintenance or archiving easier.

In the following example instance definitions are separated into two files, policy-http.py and policy-plug.py:

#instance arguments
#PNS_http --verbose=5 --policy /etc/PNS/policy-http.py
#PNS_plug --policy /etc/PNS/policy-plug.py

For more information on the configuration files, see the manual pages for instances.conf and Application-level Gateway. The manual pages can be accessed by using the man instances.conf and man PNS commands, installed by default on PNS. Also see the Appendix C, PNS manual pages in Proxedo Network Security Suite 2 Reference Guide for further details.

10.10.2. Application-level Gateway control

Starting and stopping firewall instances is performed automatically in the default installation. However, it is possible to manually control the firewall instances with the PNSctl utility. PNSctl starts and stops PNS instances using the instances.conf file. One or more instance names can be passed to PNSctl as arguments. If an error occurs while starting or stopping one of them, an exclamation mark ('!') is appended to the name of the instance as the script processes the requests.

To control Application-level Gateway with PNSctl, enter the following lines.

PNSctl start|stop <instance name> <instance name> <...>

Besides the start and stop parameters for controlling instances, PNSctl has some other parameters as well.

  • PNSctl status <instance>

    printing the status of the specified PNS instance

  • PNSctl szig <instance>

    displaying some internal information about the specified PNS instance

  • PNSctl inclog <instance>

    incrementing the logging level of the specified instance by one level

  • PNSctl declog <instance>

    decrementing the logging level of the specified instance by one level

For a full list of the available parameters with short explanations, type PNSctl at a command prompt without parameters, or issue the man PNSctl command. The manual page of PNSctl is also available at velactl(8) in Proxedo Network Security Suite 2 Reference Guide.

10.11. Managing core dump files

PNS uses systemd-coredump to capture crashes of the application firewall. systemd-coredump provides the coredumpctl command line tool to manage core dumps. It creates compressed core dump files under the /var/lib/systemd/coredump directory, and writes the related log entries into the journal and into the syslog-ng /var/log/messages file.

The coredumpctl list command presents a summary of the core dump events. The coredumpctl dump <PID> -o /path/to/uncompressed/coredump command extracts the core dump of the process with the given PID to the specified file.

By default, systemd-coredump uses at most 10% of the file system holding the /var/lib/systemd/coredump directory, and leaves at least 15% of it free. The preferred setting ensures more than 200 GByte of storage on the file system holding the /var/lib/systemd/coredump directory for the host. For smaller partitions it is recommended to customize the systemd-coredump settings with the provided FreeText template, and to adjust the space allowed for core dumps and the required free space with the help of the MaxUse= and KeepFree= parameters.
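
The following fragment is a hedged sketch of such a customization (for example, provided through the FreeText template); the file name and the values are illustrative only:

# /etc/systemd/coredump.conf.d/pns.conf
[Coredump]
MaxUse=40G
KeepFree=20G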

Chapter 11. Key and certificate management in PNS

The use of cryptography, encryption and digital signatures is becoming more and more widespread in electronic communication. Nowadays they are an essential part of e-business and e-banking solutions, as well as other fields where the identity of the communicating parties has to be verified. Communication through secure (encrypted) channels is also becoming increasingly popular. This chapter offers a brief introduction into the fields of cryptography and the public key infrastructure (PKI), describing how they can be used for authentication and secure communication, and the PKI system developed for MS to support them.

11.1. Cryptography basics

The goal of using encryption in communication is twofold: to guarantee the privacy of the communication, so that no third party can acquire the information, and to verify its integrity — to make sure that it was not damaged (or deliberately modified) on the way. Privacy can be guaranteed by the use of encryption algorithms, while integrity protection requires the application of hashing algorithms. Secure communication utilizes both of these techniques.

The main concepts and requirements of secure communication are the following:

  • Both the sender and the receiver of the message have access to the algorithm (this is practically a piece of software, often used transparently to the actual user) that can be used to encrypt and decrypt the message.

  • Both have access to a special piece of information — so called key — that is required to encrypt and to successfully decrypt the message.

  • An encrypted message cannot be decrypted without the proper key, even if the encryption algorithm is known.

  • The receiver can identify if the encrypted message has been damaged or modified. (Remember, it is not necessary to understand the message to mess it up.)

  • Both the sender and the receiver can verify the identity of the other party.

11.1.1. Symmetric and asymmetric encryption

There are two main categories of encryption methods for ensuring privacy: symmetric and asymmetric encryption.

11.1.1.1. Symmetric encryption

Symmetric encryption algorithms use the same key for the encryption and decryption of a message, therefore the same key has to be available to both parties. Their advantage is their speed, the problem is that the key has to be transferred to the receiver somehow. The keys used in symmetric encryption algorithms nowadays are usually 128-256 bit long.

11.1.1.2. Asymmetric encryption

Asymmetric encryption methods use different keys for the encryption and the decryption of a message. The sender generates a keypair; messages encrypted with one of these keys can only be decoded with the other one. One of these keys is designated as the private key and is used to encrypt the messages. The other key, called the public key, is made available to anyone the sender wishes to send messages to. Anyone having access to the encrypted message and the public key can read the encrypted message and be sure that it was created with the matching private key. Certain encryption algorithms (like RSA) also make it possible to encrypt a message using the public key; in this case only the owner of the private key can read the message. The disadvantage of asymmetric encryption is that it is relatively slow and computation intensive. A suitable infrastructure for exchanging public keys is also required; this is needed to verify the identity of the sender, confirming that the message is not a forgery. This topic is discussed in Section 11.1.1.3, Authentication and public key algorithms. The length of the keys used in asymmetric encryption ranges from 512 to 4096 bits.

Tip

It is recommended to use at least 1024 bit long keys.

11.1.1.3. Authentication and public key algorithms

Being able to decrypt a message using the appropriate public key guarantees only that it was encrypted with its matching private key. It does not mean that the person (or organization) who wrote the message is who he claims to be — that is, the identity of the sender cannot be verified this way. Without an external way to successfully verify the identity of the other party, this would make communication based on public key algorithms susceptible to man-in-the-middle attacks. To overcome this problem, the identity of the other party has to be confirmed by an external, trusted third party. Two models have evolved for that kind of identity verification: web of trust and centralized PKI.

Web of trust and centralized PKI

In a web of trust based system (such as PGP), individual users can sign the certificate (including the public key and information on the owner of the key) of other users who they know and trust. If the certificate of a previously unknown user was signed by someone who is known and trusted, the identity of this new user can be considered valid. Continuing this scheme to many levels, large webs can be built. Web of trust does not have a central organization issuing and verifying certificates — this is both the strength and weakness of such systems.

In centralized PKI — as its name suggests — there are certain central organizations called Certificate Authorities (CAs) empowered to issue certificates. Centralized PKI systems are described in detail in Section 11.2, PKI Basics.

11.1.1.4. Usage of encryption algorithms for secure communication

In real-world communication, the two types of encryption are used together: a (symmetric) session key is generated to encrypt the communication, and this key is exchanged between the parties using asymmetric encryption.

The general procedure of encrypted communication is the following:

Certificate-based authentication

Figure 11.1. Certificate-based authentication


11.1.1.4.1. Procedure – Procedure of encrypted communication and authentication

  1. The sender and the receiver select a method (encryption algorithm) for encrypting the communication.

  2. The sender authenticates the receiver by requesting its certificate and public key. Optionally, the receiver can also request a certificate from the sender, thus mutual authentication is also possible. During the handshake and authentication the parties agree on a symmetric key that will be used for encrypting the data communication.

  3. The sender encrypts his message using the symmetric key.

  4. The sender transmits the message to the receiver.

  5. The receiver decrypts the message using the symmetric key.

  6. The communication between the parties can continue by repeating steps 3-5.

Another important aspect is that suitable keys have to be created and exchanged between the parties, which also requires some sort of secure communication. It also has to be noted that — depending on the exact communication method — the identity of the sender and the receiver might have to be verified as well.

The strength of the encryption is mainly influenced by two factors: the actual algorithm used, and the length of the key. From the aspect of keylength, the longer the key is, the more secure encryption it offers.

11.1.1.5. Hashing

Hashing is used to protect the integrity of the message. It is essentially a one-way algorithm that creates a fixed-length extract (hash) of the message (or document). This hash has the following properties:

  • it is specific to the given document,

  • changing even a single bit in the document changes the hash,

  • it is not possible to predict how a certain change in the document modifies the hash (that is, it is impossible to predict the hash),

  • it is not possible to recover the original document from the hash.

11.1.1.6. Digital signature

A digital signature is essentially the hash of the signed document, encrypted by the private key of the signer. The genuineness of the document can be verified by generating the hash of the document received, decrypting the signature using the public key of the sender, and comparing the hash contained in the signature to the one generated from the received document. If the two hashes are identical, the document received is the same as the one sent by the sender, and has not been modified on the way.
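
As an illustration of the process, a document can be signed and the signature verified with the openssl command-line tool, independently of PNS; the key and file names are illustrative only:

# sign: hash the document and encrypt the hash with the private key
openssl dgst -sha256 -sign private.pem -out document.sig document.txt
# verify: recompute the hash and compare it with the decrypted signature
openssl dgst -sha256 -verify public.pem -signature document.sig document.txt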

11.2. PKI Basics

The purpose of a PKI system is to provide a way for users to reliably authenticate each other. This requires the users to have private-public keypairs (as described in Section 11.1.1.2, Asymmetric encryption), some sort of certificate to verify the user's identity, and a system to manage and distribute keys and certificates. For verifying the identity of a user, either centralized PKI systems or webs of trust can be used.

11.2.1. Centralized PKI system

The centralized model is based on authorizing institutes, so called Certificate Authorities (CAs), to verify the identity of the user or organization and certify it in a digital certificate. Since there is no single, worldwide CA guaranteeing the identity of everyone, the identity of a party can be considered valid if its certificate was signed by a trusted CA. A trusted CA is a CA that has been decided to be trustworthy; there is no general algorithm or method to determine which CAs can be trusted. A 'trusted CA list' includes the certificates of all the CAs deemed trustworthy.

11.2.1.1. CA chains and Root CAs

CAs themselves also have to certify their identity, meaning they also need certificates. These certificates are usually signed by another, higher-level CA. This allows hierarchies of CAs to be created, so that even if a CA is not explicitly trusted (because it is unknown and therefore not on the list of trusted CAs), the higher-level CA that signed its certificate might appear on the list, which makes the lower-level CA trustworthy.

Obviously, using this method alone is not sufficient, since it always requires a higher-level CA. Therefore, self-signed CA certificates also exist, meaning that the CA itself has signed its own certificate. This is not uncommon; a CA with a self-signed certificate is called a root CA, because there is no higher-level CA above it. To trust a certificate signed by this CA, it must necessarily be in the 'trusted CAs' list.

Certificate chains

Figure 11.2. Certificate chains


Note

To allow easier management, the trusted CA lists usually contain only root CAs.

11.2.2. Digital certificates

A digital certificate is a digital document conforming to the X.509 standard that certifies that a certain public key is owned by a particular user or organization. This document is signed by a third party (the CA). This data file contains the public key of its owner, as well as the following information:

  • Not before/Not after: Validity (from/to date) of the certificate.

  • Purpose: For what end may the certificate be used (for example, digital signature, data encryption, and so on).

  • Issuer: The Distinguished Name of the Certificate Authority that signed the certificate.

  • Subject: The Distinguished Name of the owner of the certificate.

  • Distinguished Name: The distinguished name (DN) usually contains the following information (not all the fields are mandatory, and other optional fields are also possible). A DN is often represented as a comma-separated list of fieldname-value pairs, as shown in the example after this list.

    • Country: 2-character country/region code.

    • State: State where the organization resides.

    • Locality: City where the organization resides.

    • Organization: Legal name of the organization.

    • Organizational Unit: Division of the organization.

    • Common Name: The common name is often the address of the website or the domain name of the organization, for example, www.example.com, or the name of the user in case of personal certificates.
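
For example, a DN represented as a comma-separated list of fieldname-value pairs might look like the following; the values are illustrative only:

C=HU, ST=Budapest, L=Budapest, O=Example Ltd., OU=IT Security, CN=www.example.com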

11.2.3. Creating and managing certificates

When an organization wishes to create a certificate, it has to perform the following:

11.2.3.1. Procedure – Creating a certificate

  1. Generate a private-public keypair.

    Tip

    The secure storage of private keys has to be solved.

  2. Prepare a certificate signing request (CSR). For filling the request form, the information contained in the distinguished name has to be provided (for example, common name, organization, and so on).

  3. The CSR is bundled together with the public key of the generated keypair.

  4. The organization selects a CA to sign the certificate request. The CSR has to be submitted to a special department of the CA, called Registration Authority (RA).

  5. The RA verifies the identity of the requestor.

    Note

    Submission of the CSR to the RA and the identity verification involves physically visiting the RA with all the papers it requires for verifying the identity of the organization and its representative (for example, documents of incorporation, ID cards, and so on).

  6. If the RA confirms the identity of the requestor, the CA signs the request using its private key, and issues the certificate.

    Tip

    If the certificate is to be used only internally (as in the case of PNS components), an own CA with a self-signed certificate can be set up to sign the certificates.

  7. The requestor can now import and use the certificate on his machines.

  8. If a certificate loses its validity or becomes obsolete, it should not be accepted anymore and is to be revoked or refreshed.

Basically the CA has the following functions:

Tip

Although to efficiently use certificates over the Internet they need to be signed by well-known Certificate Authorities, this is not required if they are used only locally within an organization. For such cases, the organization itself can create a local (internal) CA and sign the certificate of this CA. This CA having a self-signed certificate (thus it becomes the local root CA) can then be used to sign the certificates used only internally.

11.2.4. Verifying the validity of certificates

To decide whether a given certificate is valid or not, the following points have to be checked:

  • It was signed by a trusted CA.

    Note

    If the certificate of the CA signing the given certificate was signed by a trusted CA (or by another CA lower in the CA chain), the certificate can be trusted. Sometimes this CA chain can consist of several levels.

  • It is not out-of-date.

  • It has not been revoked.

  • The purpose of the certificate is appropriate, that is, it is used for the issued intention.

Note

It is possible to submit certificate signing requests (CSRs) to more than one CA (and have them signed) using the same public key. However, it is considered to be highly unethical, likely resulting in the revocation of all of the certificates involved.

11.2.5. Verification of certificate revocation state

PNS supports the following two solutions from the available methods for the verification of certificate revocation state:

  • Certificate Revocation Lists (CRLs)

  • Online Certificate Status Protocol (OCSP) stapling

Both methods are available for client- and server-side verifications as well in encryption policies.

When setting up and performing revocation checking, the encryption policies do not separate the two methods. If revocation checking is enabled, then PNS attempts to gain valid revocation information using both methods and uses any valid result.

11.2.5.1. Certificate Revocation List - CRLs

Certificate Revocation List (CRL) is a list containing the serial numbers and distinguished names of certificates that cannot be trusted anymore and were hence revoked. If a certificate loses its validity for any reason (for example, it becomes compromised because its private key is stolen), the issuing Certificate Authority (CA) revokes it and publishes this on its website in a CRL. Expired or compromised certificates must not be used, not even internally.

CRLs can be obtained usually through HTTP, certificate authorities update and publish them on their website on a regular basis.

11.2.5.2. Online Certificate Status Protocol (OCSP) stapling

Online Certificate Status Protocol (OCSP) stapling is an alternative to the previously available Certificate Revocation Lists (CRLs) for verifying the validity of certificates. The protocol is described in detail in IETF RFC 6960. With OCSP stapling it is possible to define how strictly the encryption policies shall check the revocation status of the certificates.

Online Certificate Status Protocol stapling provides the following benefits:

  • It assigns the task of keeping revocation information up-to-date to the server operators instead of requiring it from the clients, which is more convenient.

  • As the amount of data exchanged during OCSP stapling is smaller than that of CRL processing, the network load is smaller as well.

  • Clients can verify the revocation state of a certificate with minor overhead.

OCSP stapling provides potentially faster access to revocation state with less traffic. The responsibility of obtaining the revocation state of a certificate is moved from the client (for example, a web browser) to the server. The servers fetch the revocation information of their own certificates and cache it for a short period of time. When a client attempts to establish a secure connection with the server, the server staples the revocation state to the certificate it sends to the client.

For more details, see Section 3.2.4, Configuring Encryption policies in Proxedo Network Security Suite 2 Reference Guide.
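
For comparison, a plain (non-stapled) OCSP status query can also be made directly by the client. The hedged Python sketch below (cryptography package, placeholder file names and responder URL) only shows what kind of answer an OCSP responder returns; it is not how PNS implements stapling.

    import urllib.request

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.serialization import Encoding
    from cryptography.x509 import ocsp

    # Placeholder inputs for illustration only.
    responder_url = "http://ocsp.example.com"
    with open("/etc/ssl/certs/server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("/etc/ssl/certs/issuer-ca.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    # Build the OCSP request for the certificate/issuer pair and POST it.
    builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA256())
    request = urllib.request.Request(
        responder_url,
        data=builder.build().public_bytes(Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
    )
    with urllib.request.urlopen(request) as response:
        ocsp_response = ocsp.load_der_ocsp_response(response.read())

    # Assuming a successful response: GOOD, REVOKED or UNKNOWN.
    print(ocsp_response.certificate_status)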

11.2.6. Authentication with certificates

Authentication with certificates is accomplished by checking the validity of the certificates of the communicating parties.

One-way authentication: One of the parties (typically the client) requests a certificate of the server and checks its validity.

Mutual (two-way) authentication: Both the client and the server check the validity of the other's certificate. Generally both parties must own a trusted certificate (that is, a certificate signed by a trusted certificate authority).
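
The difference between the two modes is visible in how a TLS endpoint is configured. The hedged Python sketch below shows a server-side context that enforces mutual authentication; all file paths are placeholders.

    import ssl

    # Placeholder certificate and key locations for illustration only.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="/etc/ssl/server.pem", keyfile="/etc/ssl/server.key")

    # With only the lines above, authentication is one-way: the client verifies
    # the server, the server accepts any client. For mutual authentication the
    # server also demands a client certificate signed by a CA it trusts:
    context.verify_mode = ssl.CERT_REQUIRED
    context.load_verify_locations(cafile="/etc/ssl/trusted-client-ca.pem")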

11.2.7. Digital encryption in work

SSL provides endpoint authentication and communications privacy, as well as possibility for one-way or mutual authentication using certificates. The protocol allows client/server applications to communicate without being subject to eavesdropping, tampering, or message forgery. SSL runs on layers beneath application protocols (for example, HTTP, SMTP, and so on) and above the TCP transport protocol. SSL is able to use a number of symmetric and asymmetric encryption algorithms. The certificates used in the communication must conform to the X.509 standard.

IPSec is a set of protocols for securing packet flows and key exchange by encrypting and/or authenticating all IP packets. As IPSec is an obligatory part of IPv6 (and optional in IPv4), it can be expected that it will become increasingly widespread. IPSec provides end-to-end security for packet traffic — even for UDP packets, because it operates over the IP layer. In PNS, IPSec is used to construct Virtual Private Networks (VPNs). Please refer to Chapter 16, Virtual Private Networks for more details.

Note

PNS supports the use of the Secure Sockets Layer (SSLv2 and SSLv3), Transport Layer Security (TLSv1) and IP Security (IPSec) digital encryption protocols.

11.2.8. Storing certificates and keys

When importing/exporting keys and certificates, they can be stored in various file formats. MS supports the use of the PEM, DER, and PKCS12 file formats. The main differences between them are summarized below.

  • PEM: PEM (Privacy Enhanced Mail) is an ASCII text format that can store all parts of the certificate, that is, certificate, certificate signing request (CSR), Certificate Revocation List (CRL), private key (which can be optionally protected with a password). It is not necessary to store all parts in a single file.

    Tip

    If nothing restricts it, it is recommended to use the PEM format.

  • DER: The DER (Distinguished Encoding Rules) format stores any single part of a certificate in a binary file.

  • PKCS12: The PKCS12 (Public Key Cryptography Standards) is a binary file format developed to provide an easy and convenient way to backup or transport certificates. The file always contains a password-encrypted private key and the associated certificate.
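
The formats listed above can be converted into one another with standard tools. A hedged Python sketch using the cryptography package (file names and the export password are placeholders):

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, pkcs12

    # PEM -> DER: re-encode a certificate stored as ASCII text into binary form.
    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("server.der", "wb") as f:
        f.write(cert.public_bytes(Encoding.DER))

    # PKCS12: the bundle carries the password-protected private key together
    # with the associated certificate (and possibly additional CA certificates).
    with open("server.p12", "rb") as f:
        key, cert, additional_certs = pkcs12.load_key_and_certificates(
            f.read(), password=b"export-password"
        )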

11.2.9. Using Hardware Security modules

A Hardware Security Module (HSM) is a physical device that stores and manages secrets (typically private keys) and can execute cryptographic operations using the keys stored within. The secrets themselves never leave the HSM; that way, sensitive data can be kept secure in an external, more controlled environment, decreasing the risk of compromising critical sensitive data.

An HSM is typically accessed via the PKCS#11 API. PKCS#11 is a standard that defines a platform-independent interface to cryptographic tokens, HSMs and smart cards. The PKCS#11 API can be accessed using a driver/library provided by the OS or the device manufacturer.

11.3. PKI in MS

The purpose of including a light-weight PKI system in MS is to provide a convenient and efficient way to manage and distribute certificates and keys used by the various components and proxies of the managed PNS hosts. It is mainly aimed at providing certificates required for the secure communication between the different parts of the firewall system, for example, PNS hosts and MS engine (the actual communication is realized by agents). The PKI of MS also provides a consistent and convenient tool to manage both internal and external certificates between the firewalls. MS can be set to perform the regular distribution of certificates and Certificate Revocation Lists (CRLs) automatically, ensuring that no invalid or revoked certificate can be used.

Note

It has to be noted that the PKI of PNS is not a general purpose PKI system, consequently it is not recommended to be used as such. It was designed and intended for internal use between the components of the firewall system (to secure the communication between PNS hosts and MS servers, monitoring agents, and so on), and to manage external certificates available on the managed hosts.

Tip

The PKI system of MS can also manage certificates signed by external CAs. This is useful because MS provides an efficient way to handle the distribution of certificates among the managed hosts.

11.3.1. Committing changes and locking in PKI

When an administrator starts to modify the PKI settings (either on one of the Edit Certificates panels or the Site Preferences), the PKI component is locked from other administrators. Changes are committed automatically.

11.3.2. The certificate entity

MS manages the certificates, their accompanying keys, as well as the related certificate signing requests (CSRs) and Certificate Revocation Lists (CRLs) as a single entity. Therefore, whenever a key, certificate, CSR or CRL is used in connection with MS, this single entity containing all of them is referred to. This is important to remember even if it is not explicitly stated in the text.

In MS, a certificate entity has two different names, these are:

  • Unique name: The unique name is the name used to unambiguously identify the certificate entity (and its different parts) in MS. This name does not appear in the certificate, it is required only for management purposes.

  • Distinguished name: It is the distinguished name (DN) of the owner of the certificate. (Sometimes only the Common Name part is shown.) For more information, see Section 11.2.2, Digital certificates.

11.3.3. Rules of distribution and owner hosts

The owner host is the machine allowed to use the private key. (For example, when specifying on a host which certificate should be used for authentication to management agents, only the certificates owned by the given host can be selected.) It is important to set the owner host of a certificate, otherwise the certificate entity cannot be used for certain purposes (such as authentication).

Distribution of certificates can be handled automatically by MS. MS examines which certificates are used by the given host, and deploys only those. This ensures that certificates are not unnecessarily present on all machines.

Any part of the certificate entity has to be deployed to the proper host in order to be used. Two main rules govern the distribution (deployment) of certificate entities:

  • Every certificate entity is distributed only to those hosts that actually use it, and only the used parts are deployed.

  • The private key can be used only on the host(s) that are set as the owner host of the certificate entity. (Therefore the private key is only distributed to the owner host of the certificate entity.)

Note

CAs do not belong to a single host, but to the whole site, therefore their certificate entity (including their private key) can be made available on each host.

Certificates (not the full entity, only the certificate part) can be distributed everywhere.

Warning

Distribution should only be performed for complete, consistent settings. Distributing incomplete or only partially refreshed configuration can lead to lockouts. This is especially true when regenerating the keys of transfer agents. To prevent such situations, it might be useful to disable the automatic distribution when making large modifications to the PKI system, and re-enable it only after the new configuration is finished.

11.3.4. Trusted groups

Trusted CAs can be organized into so called trusted groups for more convenient use, especially for configuring proxies using certificates for authentication. In MS policies, CAs are referenced through the trusted groups containing them.

Tip

The use of trusted groups is useful for example when configuring SSL proxying, especially if connection only to servers having a certificate issued by a well-known and trusted CA (that is, not self-signed) is permitted. For more information on SSL, see Chapter 3, The PNS SSL framework in Proxedo Network Security Suite 2 Reference Guide.

11.3.5. The PKI menu

The PKI system of MS can be accessed by using the PKI menu of the main menu bar.

The PKI menu

Figure 11.3. The PKI menu


The following sections introduce the function and use of each menu item.

11.3.5.1. Site Preferences

The Site Preferences menu can be used to apply site-wide parameters to the CAs.

Site Preferences

Figure 11.4. Site Preferences


The parameters in details are:

  • Automatic distribution properties

    • Refresh base and refresh interval: These parameters define the starting time and the interval of the automatic certificate distribution, that is, at what time the distribution of certificates and CRLs should start, and how often it should be performed.

      Tip

      It is recommended to perform automatic distribution every 4 or 6 hours.

  • Default distinguished name: The fields of these parameters are filled in automatically when creating new CA certificates or CSRs, which is especially useful if a large number of certificates has to be created.

11.3.5.2. Distribution of certificates

The automatic certificate distribution can be enabled from the PKI menu, and will be performed based on the parameters set under the Site Preferences menu item. Manual distribution can be performed by selecting Distribute Certificates from PKI in the main menu. When distributing CA certificates, the CRLs are also distributed.

11.3.5.3. The Edit Certificates menu

Most of the actual PKI-related tasks can be performed using the Edit Certificates menu item. Selecting this item displays the PKI management window of the selected site.

The Edit Certificates menu

Figure 11.5. The Edit Certificates menu


This window has the following tabs:

  • PKI management tab is used for managing local CAs. This includes managing certificates and certificate signing requests, refreshing keys, and so on.

  • Trusted CAs tab is for managing trusted certificate authorities, creating new ones, grouping them, and so on.

  • Certificates tab is for managing certificates: creating new certificate signing requests (CSRs), as well as for importing/exporting certificate entities.

On all three tabs, information about the currently selected certificate (or CA certificate) is displayed in the lower section of the panel. This information includes the following data:

Certificate information

Figure 11.6. Certificate information


The following data is displayed:

  • the distinguished name of the CA issuing the certificate

  • the subject of the certificate

  • the validity period of the certificate

  • the information on the algorithm used to generate the keys, including the length of the key

  • any X.509 extensions used in the certificate

    Note

    The X.509 standard for certificates supports the use of various extensions, for example, to specify for what purposes the certificate can be used, and so on. For details on the possible extensions, see Appendix B, Further readings.

11.3.6. PKI management

A tree-like navigation window displays the managed internal CAs. On a newly installed system only local CAs created by default are available. Expired certificates are shown in red.

The PKI management navigation window

Figure 11.7. The PKI management navigation window


The internal CAs have small arrows that can be used to display the certificates issued and revoked by the CA.

For a given certificate, the following information is displayed:

  • the common name of the certificate

  • the validity (not before and not after)

  • the state, whether the certificate is active (a) or pending (p)

    A certificate becomes pending if the certificate of the CA issuing it (or the certificate of a CA higher in the CA chain) is refreshed. A certificate has to be refreshed if its validity period has expired, even if its private key has not changed. This is because the hash of the refreshed certificate is different from the old one.

    Warning

    When the certificate of a CA is refreshed, all certificates issued by the CA have to be refreshed (reissued) as well. If the CA has issued certificates for sub-CAs, then the certificates issued by these sub-CAs have to be refreshed, too.

11.3.6.1. The command bar of PKI management

The Command bar of the PKI management window contains the different commands that can be issued for the certificate or the CA selected.

PKI management commands

Figure 11.8. PKI management commands


The available commands are:

  • Sign: This action is available only for internal CAs, used to sign certificate signing requests (CSRs). After clicking on it, a list of unsigned CSRs is displayed. The list shows the distinguished names of the CSRs. Parameters for the certificate to be signed can be overridden here (period of validity, X.509 extensions, and so on).

    Note

    It is possible to multi-select a number of certificates for this activity, that is to sign multiple internal CAs or CSRs at once.

  • Refresh: This command can be used to refresh certificates, that is, to renew them by extending their validity period if it has expired, or to generate new keys for the certificate. Key generation is only performed if the Regenerate private key checkbox is selected.

    Tip

    It is recommended to regenerate the keys as well when refreshing a certificate for any reason.

  • Refresh CRL: It is available only for CAs. The CRL of the CA is valid until the time specified. The refreshed CRL will only be used on the managed hosts after distribution. MS distributes certificate entities, that is, when distributing certificates the corresponding CRLs are automatically distributed as well.

  • Revoke: It is available only for certificates signed by an internal CA. It marks the certificate as invalid and adds it to the CRL of the CA. CA certificates can also be revoked this way.

    Note

    Self-signed certificates (that is, certificates of local root CAs) cannot be revoked.

    Note

    It is possible to multi-select a number of certificates for the Revoke activity. However, if the Issuer of the selected certificates is not the same, the Revoke button will not be active.

    Note

    If any certificate selected for Revoke is in use in the current configuration, a warning will be displayed to inform the administrator. It is important that in case a certificate is in use, it cannot be revoked. If the certificate in use is part of a multiple selection of certificates for the Revoke activity, none of the selected certificates will be revoked.

    If any of the certificates selected for Revoke is used in the configuration, a similar warning is displayed:

    Certificate used in the configuration warning

    Figure 11.9. Certificate used in the configuration warning


The table below briefly summarizes the CAs created and used by default in PNS.

Name of the CA   Purpose
MS_Root_CA       The Root CA of PNS, used to sign the certificates of all other local CAs in PNS.
MS_Engine_CA     It signs the certificate of the MS engine.
MS_Agent_CA      It signs the certificates of the transfer agents.

Table 11.1. Default CAs and their purpose


For details on configuring agent and engine certificates, please refer to Chapter 13, Advanced MS and Agent configuration.

11.3.7. Trusted CAs

This menu item is for managing certificate authorities. The upper section of the panel displays the list of available CAs, both internal and external. Apart from the default internal CAs, the certificates of a number of well-known external CAs (for example, VeriSign, NetLock) are imported as well.

Trusted CAs

Figure 11.10. Trusted CAs


The following information is displayed on each CA:

  • Common Name: It displays the common name of the CA.

  • Parts: It denotes the components of the certificate entity available for the CA.

    • c: It stands for certificate. Usually this is the only part available for external CAs.

    • k: It denotes the private key of the certificate.

    • r: It refers to the certificate signing request (CSR).

    • l: It is the CRL of the CA.

    An internal CA is fully functional if all of its parts are available (CRL is optional).

  • Trusted groups: It denotes the name(s) of the trusted groups that the CA is member of.

  • Not before/not after: It defines the validity of the CAs' certificate.

  • CRL expiry: It defines the date until the CRL of the given CA is valid. If this field is empty, no CRL has been released by the CA so far.

11.3.7.1. The command bar of Trusted CAs

The command bar contains various operations that can be performed on the CAs. Some of them require a CA to be selected from the information window; in this case the given operation is performed on the selected CA. Note that for some of the activities a multi-select option is available for performing mass operations. These possibilities are described in detail at each activity.

  • New CA: Create a new local Certificate Authority. For details, see Procedure 11.3.7.2, Creating a new CA. As CAs require unique names, they can only be created one by one.

  • Import: Import a CA certificate from a PEM, DER, or PKCS12 formatted file. Only one CA can be imported at a time. The Import into selected object option can only be selected if only one line is selected in the Trusted CA list at the time of the Import.

  • Export: Export the certificate of the selected CA into a file in PEM, DER, or PKCS12 format. The PKCS12 format is only available for internal CAs.

  • Owner: A CA available on a site can be made available on all sites managed by MS by clicking this button and checking the Available on all sites checkbox. Making a CA certificate available on all sites cannot be reversed, that is, once a CA has been made available on all sites, it cannot be limited to a single site later. This has the same effect as checking the corresponding checkbox when creating a new CA.

    Warning

    This operation cannot be reversed or undone.

  • Self sign: Self-sign the certificate signing request (CSR) of the selected local CA. Only certificates not yet signed by a CA can be self-signed. This activity can only be performed on one item at a time; multi-selection is not possible.

    Note

    Local root CAs can be created by self-signing a so far unsigned CSR of a Trusted CA.

  • CRL settings: Set the parameters for refreshing the CRL of the selected external CA. Note that only single selection is possible here.

    CRL settings

    Figure 11.11. CRL settings


    The following parameters can be set:

    • Refresh base: It defines at what time the retrieval of the CRL shall be started.

    • Refresh interval: It defines how often the CRL shall be retrieved.

      By setting the Refresh base to 00:00 and the Refresh interval to 04:00, the CRL will be downloaded every four hours, starting from midnight.

    • Refresh URL: The location from which the CRL can be retrieved. The CRL can be downloaded through HTTP.

      Note

      It is very important to set the refresh URL option, otherwise the validity of the certificates issued by the CA cannot be reliably verified. The CRL shall be downloaded and automatically distributed regularly.

    • Data type: It defines the format of the CRL to be downloaded (PEM or DER).

  • Password: Change the password of the selected local CA, or define a password here if one has not been configured yet.

  • Revoke: Revoke the certificate of the selected local CA that was signed by another local CA. Self-signed CA certificates cannot be revoked this way. For details, see Procedure 11.3.8.3, Revoking a certificate.

    Note

    It is possible to multi-select a number of certificates for the Revoke activity. However, if the Issuer of the selected certificates is not the same, the Revoke button will not be active.

    Note

    If the certificate(s) selected for Revoke is in use in the current configuration, a warning will be displayed to inform the administrator. It is important that in case a certificate is in use, it cannot be revoked. If the certificate in use is part of a multiple selection of certificates for the Revoke activity, none of the selected certificates will be revoked.

  • Delete: Delete the selected certificate. For details, see Procedure 11.3.8.4, Deleting certificates.

    Note

    It is possible to multi-select a number of certificates for the Delete activity. If the certificate(s) selected for Delete is in use in the current configuration, a warning will be displayed to inform the administrator. It is important that in case a certificate is in use, it cannot be deleted. If the certificate in use is part of a multiple selection of certificates for the Delete activity, none of the selected certificates will be deleted.

11.3.7.2. Procedure – Creating a new CA

  1. Navigate to the Trusted CAs tab of the PKI/Edit certificates menu, and click on New CA.

    The Trusted CAs command bar

    Figure 11.12. The Trusted CAs command bar


  2. Enter the required parameters for the subject of the new CA's certificate. The CA must have a unique Common Name, but it is also helpful if the Common Name is descriptive, as that makes it easier to remember the CA's function later.

    Creating a new CA

    Figure 11.13. Creating a new CA


  3. Select the encryption algorithm and key length to be used.

    Tip

    The key of the CA certificate shall be longer than the keys of the certificates that will be issued by the CA, for example, if the CA is used to sign certificates having 1024 bit keys, the key of the CA certificate shall be at least 2048 bit long.

  4. Select the signature digest (hash) method to be used.

    Tip

    Use of the SHA256 algorithm (or stronger) is recommended; it is considered secure and is not significantly more computation intensive than weaker digests.

  5. Provide a password to protect the private key of the CA. This is required so that only authorized users can sign certificates.

  6. Click on Extensions ..., and specify for which purposes the certificate will be used.

    Specifying extensions

    Figure 11.14. Specifying extensions


    Note

    The use of extensions is optional.

  7. When creating a local root CA, check the Generate self-signed certificate checkbox and specify the validity period of the certificate.

    Tip

    If the CA is to be available on every site managed, do not forget to check the appropriate checkbox when creating the New CA.

    Warning

    A CA available on a site can be made available on all sites managed by MS by checking the Available on all sites checkbox. Making a CA certificate available on all sites cannot be reversed, that is, once a CA has been made available on all sites, it cannot be limited to a single site later.
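
The local root CA created in this procedure corresponds, in cryptographic terms, to a self-signed certificate carrying CA-specific X.509 extensions. The following hedged Python sketch (cryptography package, placeholder names and validity period) illustrates what such a certificate contains; it is not how MS generates CAs internally.

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example_Local_Root_CA")])
    now = datetime.datetime.utcnow()

    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)          # self-signed: subject and issuer are the same
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )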

11.3.7.3. Managing trusted groups

The Available groups are displayed on the right side of the panel, while the Trusted groups, listing the groups that the selected CA is a member of, are displayed on the left. To add a CA to a group or remove it from one, select the CA to configure, select (or multi-select) the groups to be moved, and use the arrow-shaped icons in the middle.

Trusted CA groups

Figure 11.15. Trusted CA groups


11.3.7.4. Procedure – Signing CA certificates with external CAs

If you want to use an external CA to sign the certificate of a local CA, complete the following steps.

  1. Generate a private-public keypair and an associated certificate signing request (CSR) using the Generate button of the Certificates tab.

  2. Export this CSR into a file using the Export button.

  3. Have the CSR signed.

  4. If the CA approves your identity and signs the certificate, Import it to the PKI system of MS.

    Note

    Make sure the appropriate entity is selected (that is, the signed certificate is imported to the proper CSR) and the Import into selected object option is checked.

  5. The certificate entity can now be distributed and used on your machines.

11.3.8. Managing certificates

All non-CA certificates available on the selected site can be managed here. It is also possible to import and export certificates.

Certificates

Figure 11.16. Certificates


The upper section of the panel displays the list of available certificates, along with the following information on each:

  • Unique name: It is the unique name of the certificate entity.

  • Common name: It is the common name of the certificate.

  • Parts: It displays the available parts from the certificate entity:

    • c: For external certificates usually only their certificate (c) is available.

    • k: For internal certificates their private key (k) can also be available.

    • r: For internal certificates the certificate signing request (CSR) can also be available.

  • Issuer: This is the CA that has signed the certificate. This field is empty if the CSR is not yet signed.

  • Owner host: It determines which host can use the private key.

  • Not before/Not after: The certificate's period of validity.

11.3.8.1. The Certificates command bar

The command bar contains buttons that can be used to perform various operations on the certificates available on the site. Some of them require a certificate to be selected from the information window; in this case the given operation is performed on the selected certificate.

The Certificates command bar

Figure 11.17. The Certificates command bar


  • Generate: Generate a new certificate signing request (CSR).

  • Import: Import a certificate (or a part of it) from a PEM, DER, or PKCS12 formatted file. Only one certificate can be imported at a time. The Import into selected object option can only be selected if only one line is selected in the Certificates list at the time of the Import.

  • Export: Export the selected certificate (or a part of it) into a file in PEM, DER, or PKCS12 format. When exporting a single certificate, the name of the exported file has to be provided. Note that the certificate must have a Common Name. If a private key is also exported, a password can be defined for it. It is also possible to select multiple certificates for Export; in that case the certificates are exported into the selected folder, into files named after their unique names. If private keys are also exported, the passwords belonging to them are generated into the same folder, into .txt files named after the unique names of the certificates.

  • Owner: The owner host of the certificate can be specified here. Also, a certificate available on a site can be made available on all sites managed by MS by checking the Available on all sites checkbox. Unlike for CAs, this setting is reversible, and the owner host can also be changed later.

  • Revoke: Revoke the selected certificate. This operation requires the password of the issuer CA.

    Note

    It is possible to multi-select a number of certificates for the Revoke activity. However, if the Issuer of the selected certificates is not the same, the Revoke button will not be active.

    Note

    If the certificate(s) selected for Revoke is in use in the current configuration, a warning will be displayed to inform the administrator. It is important that in case a certificate is in use, it cannot be revoked. If the certificate in use is part of a multiple selection of certificates for the Revoke activity, none of the selected certificates will be revoked.

  • Delete: Delete the selected certificate. For details, see Procedure 11.3.8.4, Deleting certificates.

    Note

    It is possible to multi-select a number of certificates for the Delete activity. However, if any of the certificates selected for Delete is in use in the current configuration, a warning is displayed to inform the administrator that the selected group of certificates cannot be deleted. Note that even if only one certificate of the selection is in use, none of the selected certificates will be deleted.

11.3.8.2. Procedure – Creating certificates

To create a certificate, complete the following steps.

  1. Select PKI > Edit Certificates from the menu and click Certificates.

  2. Click Generate, and fill the Generate CSR form.

    Creating a certificate

    Figure 11.18. Creating a certificate


    1. Enter a Unique Name that will identify the object containing the certificate and the key in MS. Note that if you press Enter after filling in the Unique Name field, its value is also copied to the Common Name field.

    2. Select from the combobox the host that will be the owner of the certificate.

    3. If you want the certificate to be available on every site that is managed in MS, select Certificate available on all sites.

    4. Fill the Subject section of the request as appropriate. Into the Country field, enter only a two-letter ID (for example, US). Enter a name for the certificate into the Common name field. Note that if these fields have been filled in at Site Preferences, those values are automatically offered here.

    5. Select the length of the key (1024, 2048, or 4096 bit).

      Note

      Longer keys are more secure, but the time needed to perform key signing and verification operations (required for using encrypted connections) increases significantly with the length of the key used. By default, 2048 bit is used.

      MC 2 can create only RSA keys, generating DSA keys is not supported.

      Warning

      If the certificates/keys have to be used on machines running older versions of the Windows operating system, using only 1024 bit long keys might be required, since these Windows versions typically do not support longer keys.

    6. Select the method (SHA256 or SHA512) to be used for generating the Signature digest (hash).

    7. By clicking on Extensions ..., the various purposes of the certificate can be specified. For details on X.509v3 extensions, see Appendix B, Further readings.

      Specifying extensions

      Figure 11.19. Specifying extensions


    8. After specifying all the required options, click OK.

  3. Navigate to the PKI management tab, and in the navigation window select the local CA to be used to sign the request (for example, MS_Agent_CA for transfer agents, and so on).

    Signing a certificate

    Figure 11.20. Signing a certificate


  4. Click on Sign. A window will be displayed listing the submitted but not yet signed certificate signing requests (CSRs). Note, that it is possible to use multi-select here. The list displays the distinguished name of the CSRs, this includes the various Subject fields (Country, locality, common name, and so on) specified when generating the request.

    Selecting the certificate to be signed

    Figure 11.21. Selecting the certificate to be signed


  5. Set the validity period (Valid after/Valid before dates) of the certificate. A pop-up calendar is available through the ... button. Alternatively, after setting the Valid after date, the Length field can be used to specify the length of the validity in days, automatically updating the Valid before field.

  6. By clicking on Extensions ..., various X.509 extensions can be specified. These extensions can be used to ensure in filters that only certificates used for their intended purpose are accepted.

    Note

    Note that although similar configuration details can be defined when creating a certificate (and different settings can be defined for each certificate), the settings defined here overwrite any other configuration settings and only these settings will be applied.

  7. Enter the password of the CA required for issuing new certificates, and click OK.
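
Conceptually, the Generate and Sign steps of this procedure produce a standard PKCS#10 certificate signing request and an X.509 certificate issued by the selected CA. The hedged Python sketch below (cryptography package) shows an equivalent flow; the subject names, the stand-in CA and the validity period are illustrative only and do not reflect the internal implementation of MS.

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Step "Generate": create a key pair and a certificate signing request.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
            x509.NameAttribute(NameOID.COMMON_NAME, "pns-host-1"),
        ]))
        .sign(key, hashes.SHA256())
    )

    # A throwaway issuing CA so that the example is self-contained; in practice
    # this is one of the local CAs managed by MS (for example, MS_Agent_CA).
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example_Local_CA")])

    # Step "Sign": the CA issues a certificate for the CSR with the chosen validity.
    now = datetime.datetime.utcnow()
    certificate = (
        x509.CertificateBuilder()
        .subject_name(csr.subject)
        .issuer_name(ca_name)
        .public_key(csr.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(ca_key, hashes.SHA256())
    )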

11.3.8.3. Procedure – Revoking a certificate

To revoke a certificate, complete the following steps.

  1. Select the certificate to be revoked.

    Note, that it is possible to multi-select a number of certificates for the Revoke activity. However, if the certificate has no Issuer, the Revoke button will not be active.

    Note

    It is possible to multi-select a number of certificates for the Revoke activity. However, if the Issuer of the selected certificates is not the same, the Revoke button will not be active.

    Note

    Note that if the certificate(s) selected for Revoke is in use in the current configuration, a warning will be displayed to inform the administrator. It is important that in case a certificate is in use, it cannot be revoked. If the certificate in use is part of a multiple selection of certificates for the Revoke activity, none of the selected certificates will be revoked.

    Revoking certificates

    Figure 11.22. Revoking certificates


  2. For general certificates, click on Revoke either on the PKI management or the Certificates tab. CA certificates can be revoked from either the PKI management or the Trusted CAs tab.

    Note

    Only certificates signed by local CAs can be revoked.

    Self-signed CA certificates cannot be revoked.

  3. Enter the password of the issuer CA. If the private key associated to the certificate is to be revoked as well, check the Archive CSR and private key checkbox. Click OK.

    Revoking the private key

    Figure 11.23. Revoking the private key


    Tip

    If the private key of a certificate has been compromised, the private key should be revoked along with the certificate. Generally it is recommended to generate new keys each time a certificate is refreshed.

  4. Following the Revoke of the certificate, the certificate will disappear from the lists of certificates on the Certificates tab, and will only appear on the PKI management tab, in the Revocations list of its CA.

  5. The CRL of the issuer CA is refreshed automatically.

  6. The revocation will be effective on the PNS hosts only when their CRL information is updated from MS. If MS is not configured to perform distribution automatically (or the update should be made available immediately), it can be performed manually through the PKI/Distribute Certificates menu item.
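
In cryptographic terms, revoking a certificate means adding its serial number to a freshly signed CRL of the issuing CA, which is then distributed to the hosts. The following hedged Python sketch (cryptography package, stand-in CA key and serial number) illustrates that step only; it does not reflect how MS maintains its CRLs internally.

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Stand-ins for the issuing CA; in MS these belong to the selected local CA.
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example_Local_CA")])

    now = datetime.datetime.utcnow()
    revoked_entry = (
        x509.RevokedCertificateBuilder()
        .serial_number(1234567890)      # serial of the certificate being revoked
        .revocation_date(now)
        .build()
    )
    crl = (
        x509.CertificateRevocationListBuilder()
        .issuer_name(ca_name)
        .last_update(now)
        .next_update(now + datetime.timedelta(days=7))
        .add_revoked_certificate(revoked_entry)
        .sign(ca_key, hashes.SHA256())
    )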

11.3.8.4. Procedure – Deleting certificates

To delete a certificate from the MS PKI, complete the following steps.

  1. Select the certificate you want to delete on the Certificates tab, and click Delete.

    Note

    It is possible to multi-select a number of certificates for the Delete activity. However, if any of the certificates selected for Delete is in use in the current configuration, a warning is displayed to inform the administrator that the selected group of certificates cannot be deleted. Note that even if only one certificate of the selection is in use, none of the selected certificates will be deleted.

  2. From the main menu, select PKI > Distribute Certificates.

11.3.8.5. Procedure – Exporting certificates

To export a certificate from the MS PKI, complete the following steps.

  1. Select the certificate to be exported on the Certificates tab.

    Note

    When only one certificate is exported, the name of the exported certificate file has to be defined during the export. When several certificates are exported at once using multiple selection, the exported files are named based on the unique names of the certificates. If private keys are exported along with multiple certificates, passwords are generated for them as well. The passwords of the private keys are saved as .txt files into the same directory the certificates are exported to, named after the unique name of the certificate they belong to.

  2. Click on Export.

  3. Select the directory to save the file(s) to. Specify the filename in case of a single certificate export (when exporting multiple certificates, the file names are created automatically), and click OK.

    Exporting certificates

    Figure 11.24. Exporting certificates


    Note

    File extension is NOT added automatically to the filename.

    The file will be saved to the local machine, that is, the one that is running MC.

  4. Depending on the file format to be used, the part(s) to be saved can be specified. Naturally, only the parts that are available can be selected (for example, only the CSR or the key if the certificate has not been signed yet).

    Selecting certificate components to export

    Figure 11.25. Selecting certificate components to export


  5. If the private key is exported as well and only one certificate is exported, the key can be password-protected by specifying an Export password. If more than one certificate is exported and private keys are also selected for export, the passwords belonging to the private keys are automatically generated into the same folder where the certificates are exported, saved into .txt files named after the unique name of the certificate they belong to.

11.3.8.6. Procedure – Importing certificates

  1. Click on Import on the Trusted CAs tab for importing a CA certificate, or on the Certificates tab for normal certificates.

    Only one CA or certificate can be imported at a time; consequently, the Import into selected object option can only be selected if only one line is selected in the Trusted CA list or in the Certificates list at the time of the Import.

    Note

    If the parts contained in the file are to be imported for being added to an existing certificate entity, select the given certificate entity before clicking on Import. This function is useful for importing certificates signed by external CAs.

  2. Specify the file format to be used, and select the file to be imported.

  3. Select the part(s) to be imported.

    Importing certificates

    Figure 11.26. Importing certificates


    Importing CA certificates

    Figure 11.27. Importing CA certificates


  4. There are two ways to handle the data imported from the file: creating a new entity or appending them to an existing one.

    Creating a new entity: Select the Import as new object radio button, enter a Unique name and you can also select the Owner host of the object if needed. This method is useful especially for importing the certificates of external CAs.

    Import parts to an existing certificate: It is possible to import the part(s) contained in the file into an existing certificate entity (that is, the one that was selected before clicking on the Import button). This method should be used when importing your certificates that were signed by an external CA, so the certificate is imported to the entity containing the private key and the CSR. Select the Import into selected object radio button.

  5. Enter the Export password if the private key is imported and the key has been password-protected.

  6. Check the Certificate available on all sites checkbox if needed.

11.3.8.7. Procedure – Signing your certificates with external CAs

If you have an external CA to sign your certificates and you want to manage these certificates in PNS, complete the following steps.

Tip

The Import and Export operations provide a convenient way to handle certificates signed by external CAs. For details, see Procedure 11.3.8.6, Importing certificates and Procedure 11.3.8.5, Exporting certificates.

  1. Generate a private-public keypair and an associated CSR using the Generate button of the Certificates tab.

  2. Export this CSR into a file using the Export button.

  3. Have the CSR signed.

  4. If the CA approves your identity and signs the certificate, Import it to the PKI system of MS.

    Note

    Make sure the appropriate entity is selected (that is, the signed certificate is imported to the proper CSR) and the Import into selected object option is checked.

  5. The certificate can now be distributed and used on your machines.

11.3.8.8. Procedure – Importing certificates with external private key

Purpose: Using certificates in PKI whose private key is stored in an external resource. The external resource can be a cryptographic token or an external file in the local filesystem (a file that is not managed by MS).

The file being imported contains only the public parts of the certificate, CA, or CSR. The corresponding private key is accessed from an external resource, referenced by a URI (Uniform Resource Identifier). The URI is a string that defines the path to the external resource. Currently, the following URI schemes are implemented (illustrative example URIs are shown after the list):

  • file: the URI starts with file: and encodes a path to a local file, which stores the private key in PEM format. (e.g. file:/var/keys/userkey.pem)

  • pkcs11: the URI starts with pkcs11: and encodes an identifier to a PKCS#11 object on a cryptographic token, e.g. a key in a HSM device. (PKCS#11 is a standard that defines a platform-independent interface to cryptographic tokens, HSMs and smart cards.) The format of the URI is defined by the PKCS#11 URI scheme specification, and may vary by the type/manufacturer of the token device.

    If the PKCS#11 token requires authentication (a PIN code), the PIN can be set within the URI, or with the passphrase attribute when selecting the certificate. The latter takes precedence, so the passphrase overwrites the PIN code set in the URI (if any).
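
The following example URIs are purely illustrative; the pkcs11 attribute names follow the PKCS#11 URI scheme, but the exact attributes and their values depend on the token or driver in use and on how the key was created.

    file:/var/keys/proxy-key.pem
    pkcs11:token=pns-hsm;object=proxy-key;type=private?pin-value=123456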

Note

Certificates with an external key can be used only for TLS session authentication (Encryption Policies); they cannot be used for operations in PKI management, such as signing other entities, refreshing certificates, and so on.

To import a certificate with external key, complete the following steps.

Steps: 

  1. Click on Import on the Trusted CAs tab for importing a CA certificate, or on the Certificates tab for normal certificates.

  2. Specify the file format to be used, and select the file to be imported. The file should contain the necessary public data, but not private key data.

  3. Select the public part(s) of the certificate to be imported.

  4. Check Private key and select External button to reference an external key.

  5. Enter the URI for accessing the corresponding private key to URI for external key field.

  6. There are two ways to handle the data imported from the file: creating a new entity or appending them to an existing one.

    Creating a new entity: Select the Import as new object radio button, enter a Unique name and you can also select the Owner host of the object if needed. This method is useful especially for importing the certificates of external CAs.

    Import parts to an existing certificate: It is possible to import the part(s) contained in the file into an existing certificate entity (that is, the one that was selected before clicking on the Import button). This method should be used when importing your certificates that were signed by an external CA, so the certificate is imported to the entity containing the private key and the CSR. Select the Import into selected object radio button.

  7. Check the Certificate available on all sites checkbox if needed.

11.3.8.9. Procedure – Monitoring licenses and certificates

Purpose: 

The MS and PNS hosts monitor the validity of product licenses and certificates, and automatically send alert e-mails if any of them is about to expire. By default, the host sends an e-mail alert to the administrator e-mail address specified during the installation, 14 days before the expiry.

The validity of the available product licenses (MS, PNS, CF, AS, NOD32) is checked once every day on each host that is managed from MS. The validity of CA certificates and certificates is checked only on the MS host.

To configure the details of the certificate monitoring, complete the following steps.

Steps: 

  1. Add a new Text editor component to the host to edit the /etc/MSagent/expiration.conf file. For details on editing a file using the Text editor component, see Procedure 8.1.1, Configure services with the Text editor plugin.

  2. Configure the address of the mailserver, and the e-mail address of the recipients as needed. For details on the available parameters, see the manual page of expiration.conf.

  3. Commit and upload your changes.

  4. Execute the /etc/cron.daily/expiration_check.py --test-mail command to send a test e-mail.
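
The check performed by the expiration script essentially compares each certificate's notAfter date with the current date plus the warning period. The following generic Python sketch shows this kind of check; it is not the actual PNS implementation, and the certificate directory and warning period are placeholders.

    import datetime
    import glob

    from cryptography import x509

    WARNING_PERIOD = datetime.timedelta(days=14)
    deadline = datetime.datetime.utcnow() + WARNING_PERIOD

    for path in glob.glob("/etc/ssl/certs/*.pem"):
        with open(path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        if cert.not_valid_after < deadline:
            # In the real setup an alert e-mail is sent to the administrator.
            print(f"{path} expires on {cert.not_valid_after:%Y-%m-%d}")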

Chapter 12. Clusters and high availability

12.1. Introduction to clustering

A cluster is a group of computers dedicated to perform the same task. These computers (referred to as nodes of the cluster) use the same (or very similar) configuration files (policies, packet filtering, and so on). The goal of clustering in general is to integrate the resources of two or more devices (that could otherwise function separately) for backup, high availability or load sharing purposes. In other words, clusters are computer systems in which more than one computer shares the tasks or the load in the network. A PNS cluster usually consists of a group of firewall hosts that maintain the same overall security policy and share the same configuration settings.

Basically there are two types of clusters. In a failover cluster if a machine breaks down, a spare computer is started immediately to ensure that the service provided by the computers is continuously available (see Section 12.2.1, Fail-Over clusters). Load balancing clusters are used when the traffic generated by the provided service is beyond the capabilities of a single computer (see Section 12.2.2, Load balance clusters).

Clustering provides the following advantages:

  • ensures continuous service and decreased downtime,

  • contributes to High Availability (HA),

  • assists to satisfy service level agreements, and

  • improves load balance in the system.

The following terms will be frequently used in this chapter:

Host

A single computer offering services to the clients.

Node

A single computer that belongs to a cluster, offering services to clients. Both nodes of the cluster offer exactly the same functionality.

Cluster

A (logical and physical) group of computers offering services to the clients. Clusters are made up of nodes. In MS, the nodes of a cluster are handled together: from the administration point of view a cluster behaves similarly to a single host.

12.2. Clustering solutions

12.2.1. Fail-Over clusters

The aim of failover clustering is to ensure that the service is accessible even if one of the servers breaks down (for example, because of a hardware error). In failover clusters only one of the nodes is functioning and carries all the traffic; the other(s) only monitor the active node. In case of a system failure resulting in a loss of service, the service is started on the other node in the system. In other words, if the active component dies, the other node takes over all the services.

Note

The service fails over to the other component only in case of hardware failure; if you stop PNS, no backup mechanism is initiated automatically.

Monitoring between the cluster nodes is realized with the help of Keepalived's VRRP health checks. For more information, see Section 12.5, Keepalived for High Availability.

The transfer of services can be realized using one of the following methods:

  • Transferring the Service IP address

  • Transferring IP and MAC address

  • Sending RIP messages

12.2.1.1. Service IP transferring

In this case the servers use a virtual (alias) IP address (called Service IP), clients access the service provided by the servers by targeting this Service IP. The Service IP is carried by the active node only. If the node providing the service (that is, the master node) fails, the Service IP is taken over by the slave node. As all clients send requests towards the Service IP, they are not aware of which device that address belongs to, and do not notice any difference when a takeover occurs.

When the cluster relocates the Service IP to the other node, it sends a gratuitous ARP request to the whole network, informing the clients that the Service IP now belongs to a different node. As a result, the clients flush their ARP cache and record the new MAC address for the Service IP. (A gratuitous ARP request is an Address Resolution Protocol (ARP) request packet in which the source and destination IP are both set to the IP of the machine issuing the packet and the destination MAC is the broadcast address ff:ff:ff:ff:ff:ff. Ordinarily, no reply packet is sent in response.)

Service IP takeover is the most frequently used takeover method for PNS clusters.

Note

Some clients may not pick up the new MAC address of the Service IP until the next automatic ARP cache refresh, which causes a certain delay in Service IP transfers in the system. The problem is that the ARP cache is refreshed relatively rarely, and it is not always possible to make the clients update their ARP cache earlier.
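
The gratuitous ARP request described above can also be reproduced for testing purposes with a packet-crafting tool. A hedged Python sketch using the third-party scapy package follows; the Service IP and interface name are placeholders, and on a real cluster these packets are sent by the takeover mechanism itself.

    from scapy.all import ARP, Ether, sendp

    service_ip = "192.0.2.10"       # the virtual Service IP (placeholder)

    # Gratuitous ARP: source and destination IP are both the Service IP, the
    # destination MAC is the broadcast address, and no reply is expected.
    packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(
        op="who-has", psrc=service_ip, pdst=service_ip, hwdst="ff:ff:ff:ff:ff:ff"
    )
    sendp(packet, iface="eth0")     # requires root privileges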

12.2.1.2. IP with MAC address takeover

In some systems, usually in large networks it is disadvantageous to modify IP address – Media Access Control (MAC) pairs, because certain routers do not refresh their ARP cache, causing problems in the network traffic. In this case the failover functionality is realized by taking over the Media Access Control (MAC) address.

In such systems, all nodes use the same fixed IP and hardware MAC address in the network, and the nodes are differentiated by the state of the servicing interface. The master (active) node has its interface in up state, while the slave nodes' interfaces are kept down. If the service fails over to another node, that node's interface is brought up. Client requests are serviced by the node whose interface is in up state.

Transferring MAC address is beneficial if the resources need to be relocated very quickly.

Warning

Multiple interfaces with the same IP or MAC address connecting to a network as a result of a failed takeover can destabilize the network. Consequently, it is important to monitor the takeover process and to completely remove (for example, power off) the failed server from the network. It is the user's responsibility to devise an appropriate method for this, possibly through a notify script, such as power off via ILO/IPMI, or using a network-connected smart PDU. Furthermore, intentional switchovers must also be monitored as they will deactivate the slave as well. See Section 12.5, Keepalived for High Availability for details.

12.2.1.3. Sending RIP messages

In this case no Service IP is used. All nodes have their own IP addresses and the routing information is sent through Routing Information Protocol (RIP) messages using different metrics.

Note

RIP metrics range from 1 to 6. You have to define the metrics for each node according to your network environment.

Routers in the network select the destination components based on these metrics. They send the traffic to the node with lower metrics value. If a node fails, it is sufficient to remove it from the network and traffic is transferred through the other nodes without any further interaction.

Note

The router mediating the client requests towards the firewall has to support RIP message transfer. Desktop clients and common server machines usually do not support RIP messages.

PNS uses the Sendrip software for this purpose.

12.2.2. Load balance clusters

In load balance configurations, all nodes of a cluster provide services simultaneously to distribute system load and enhance the overall quality of service. Clients access the service by targeting a single domain name, without knowing how many servers provide the actual service.

The amount of traffic handled by one node is determined by some logic. Currently PNS does not provide any built-in tool for defining such criteria, therefore an external device has to be used. This can be either the DNS server, or a dedicated load balancer.

12.2.2.1. DNS load balancing

DNS load balancing is based on the native behavior of name resolution: when the DNS server resolves a domain name into more than one IP address, the client chooses one IP from the answers in a round-robin fashion. Although this choice disregards the actual load of the servers, the solution results in a balanced load in the system. In this case the firewalls offer a non-transparent service, because the client targets the firewall itself.
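
The behaviour relied on here can be observed with any resolver: when a name resolves to several addresses, the client picks one of them for the connection. A minimal Python illustration, assuming a placeholder host name that resolves to the addresses of all firewall nodes:

    import socket

    answers = socket.getaddrinfo("gateway.example.com", 443, proto=socket.IPPROTO_TCP)
    for family, socktype, proto, canonname, sockaddr in answers:
        print(sockaddr[0])          # one entry per firewall node address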

12.2.2.2. Load balancing with external devices

It is possible to use load balancer devices to distribute the traffic between the nodes. In this case the balancing method can be configured on the load balancer device. Of course, load balancing solutions also offer a native failover solution. If one node stops working and the load balancing device notices that, it does not direct traffic to that node until it is functioning again.

Load balancer devices offer load balancing only from the point of view of the client; they have no influence on the proxy on the other side of the firewall, so in such a case line load balancing must be solved on the firewall. If you need to share the load from several directions (physical networks), separate load balancer devices are needed in each direction.

Note

The firewall has to have a separate load balancer device towards all connected interfaces.

From the proxying point of view, all connections, and in case of multi-channel protocols (like FTP) all channels, have to go through the same node.

Directing related channels to the same node

Figure 12.1. Directing related channels to the same node


The third party device added to the system must be able to direct multi-channel protocols through the same node.

12.2.2.3. Multicast load balancing

A simple load balancing solution is to assign a multicast MAC address to the Service IP. In this case the clients target the Service IP, and a hub or switch before the firewall hosts forwards all requests to the multicast MAC address, resulting in all nodes of the cluster receiving all packets sent to the Service IP. The IP addresses of the clients are distributed between the nodes using some logic (for example, one node serves only clients with odd, the other one clients with even IP addresses), and the packet filter of each node is configured to accept only the packets of the clients they are responsible for.

Note

It is important that if in such a scenario one of the nodes fails, the remaining nodes have to take over the clients served by the failed node. This can be accomplished for example by using Keepalived virtual IPs and services.
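
The odd/even style assignment mentioned above can be expressed as a simple rule on the client address; each node then only accepts the packets of the clients it is responsible for. A hedged Python illustration of the selection logic (node index and node count are placeholders; in practice the decision is enforced by the packet filter, not by application code):

    import ipaddress

    NODE_INDEX = 0      # 0 for the first node, 1 for the second (placeholder)
    NODE_COUNT = 2

    def served_by_this_node(client_ip: str) -> bool:
        """Return True if this node is responsible for the given client."""
        return int(ipaddress.ip_address(client_ip)) % NODE_COUNT == NODE_INDEX

    print(served_by_this_node("198.51.100.7"))   # odd address: handled by node 1, prints False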

12.3. Managing clusters with MS

The nodes of a cluster have identical configurations, only a few parameters are different. When configuring clusters, all nodes are configured simultaneously, as if the cluster were a single host.

A cluster in MC

Figure 12.2. A cluster in MC


For each parameter that is different on the nodes of the cluster, links have to be used. It is also possible to link to a property of the cluster, in this case the link will be evaluated to a different value on each node. That way when the configuration is uploaded, each node will receive a configuration file containing the values relevant for the node.

Any parameter can be used as a property; usually parameters like the IP addresses of the interfaces are properties. New properties can be added any time to the cluster, not only during the initial configuration.

Naturally, not all links used in a cluster have to be links to cluster properties, regular links can be used as well. However, keep in mind that links to cluster properties are resolved to the corresponding property of the particular node. For example, a link to the Hostname property of a cluster is resolved on each node to the hostname of the node (for example, to node_1 on the first node, and so on).

Note

The PKI of the site considers the cluster to be a single host, there is no difference between the individual nodes.

As a result of using properties, adding new nodes to a cluster is very easy, since only the properties have to be filled with values for the new node.

When uploading configuration changes, or viewing and checking configurations, you can select on which node the operation shall be performed.

Controlling a service (for example, restarting/reloading) is possible on all nodes simultaneously, or only on the nodes specified in the selection window.

Selecting the target node

Figure 12.3. Selecting the target node


Status indicator icons on clusters behave identically to those on hosts, except that a blue LED indicates a partial status, meaning that the nodes of the cluster are not all in the same state (for example, the configuration was not successfully uploaded to all nodes).

When configuring rules for PNS clusters, use links to the interfaces. From the clients' point of view this makes no difference, as the clients do not target the IP of the PNS host.

For non-transparent services, the rule must use the Service IP (that is, a link to the Service IP), because that is where the clients will send their requests to.

12.4. Creating clusters

When configuring a new cluster, there are several distinct steps that have to be completed. An overview of the general procedure is presented below. The main tasks are to create and configure the cluster nodes; to configure Keepalived (required only for failover clusters and certain load balancing solutions); and finally to create the policies, services on the cluster.

First the new cluster has to be created in MC. This can be either a cluster created from scratch, or (optionally) an existing host can be converted into a cluster. In both cases the initial cluster has only a single node, the additional nodes have to be added (and bootstrapped) manually. Bootstrapping a cluster node is very similar to bootstrapping a regular host. It is important to create properties for the parameters that are different on each node (for example, hostname, IP address, and so on) and use links during configuration when referring to these properties.

In case of failover and multicast load balancing clusters, the Keepalived component also has to be installed and configured. For load balancing clusters where the load balancing is performed by an external device (that is, a load balancer, DNS server, and so on), this external device also has to be configured. Configuring Keepalived has two main steps, first the communication between the nodes has to be configured, then the Keepalived virtual IPs and services that are taken over when a node fails have to be created (see Section 12.5, Keepalived for High Availability for details).

After completing the above procedure, the cluster-specific configuration of the system is finished — later steps can be performed identically to managing the policies of regular hosts.

The individual steps of the above procedure are described in the following sections in detail.

Note

The procedures in the subsequent sections describe the configuration of a PNS firewall cluster. Although this is the most common scenario, other components of the PNS Application Level Gateway System (for example, CF, AS) can also be clustered.

Warning

When creating a PNS cluster, the MS managing the cluster must be on a dedicated machine, or on a PNS host that is not part of the cluster. MS cannot be clustered.

12.4.1. Procedure – Creating a new cluster (bootstrapping a cluster)

To create and bootstrap a new cluster, complete the following steps:

Note

As an alternative to creating a cluster and bootstrapping its first node, an existing PNS host can also be converted into a cluster. For details, see Procedure 12.4.4, Converting a host to a cluster.

  1. Select the site that will include the new cluster from the configuration tree, and click on New Host Wizard in the Management menu.

  2. Select the Cluster minimal template and click on Forward to start bootstrapping the first node of the cluster. (Bootstrapping cluster nodes is very similar to bootstrapping individual PNS hosts. For more information see Chapter 4, Registering new hosts.)

    Bootstrapping a cluster

    Figure 12.4. Bootstrapping a cluster


  3. Provide a name for the cluster (for example, Demo_cluster), as well as an Agent Bind IP address, an Agent Bind IP port and a Hostname (for example, Demo_cluster_node1) for the first node of the cluster. The node will accept connections from the MS agents on the specified Agent Bind IP/Port pair.

    Entering basic parameters

    Figure 12.5. Entering basic parameters


  4. The rest of the bootstrapping process is identical to bootstrapping a normal PNS host, that is, create a certificate for the cluster, supply a one-time-password, and so on. For more information see Chapter 4, Registering new hosts.

  5. After bootstrapping the first node of the cluster, complete the following procedures as needed:

12.4.2. Procedure – Adding new properties to clusters

As properties have to be used for all parameters that are different on each node, it is recommended to create all properties before adding the additional hosts to the cluster. Naturally, this is not required; properties can be defined any time. To add a new property to the cluster, complete the procedure below.

  1. Click the New property button on the Nodes tab of the cluster to define a new property.

    Adding a new property

    Figure 12.6. Adding a new property


  2. Enter a name for the new property, and select the type and subtype of the property.

    Defining a new property

    Figure 12.7. Defining a new property


    The possible property subtypes are the following.

    • ip_address

    • port

    • ip_netmask

    • interface_name

    • hostname

    You can set initial values for the properties as well.

    The new property is added to all nodes automatically. (Properties can be manipulated both in Nodes and Properties view.)

  3. Set the value of the new property for all nodes separately by clicking the Edit button.

12.4.3. Procedure – Adding a new node to a PNS cluster

  1. Click the New node button on the Nodes tab of the cluster.

    Adding a new node to a cluster

    Figure 12.8. Adding a new node to a cluster


  2. Set the properties of the new node in the appearing window. Enter hostname, agent bind address and bind port, and any other properties that have been added to the cluster.

    Configuring the properties of the new node

    Figure 12.9. Configuring the properties of the new node


    Tip

    It is recommended to use the default port setting.

    Do not forget to check the Commit and activate checkbox to automatically commit the changes and connect to the new node. If this checkbox is not selected when the new node is created, the configuration must be committed into the MS database manually. Also, the node cannot be connected automatically, only through a recovery connection (see Procedure 13.3.4, Configuring recovery connections).

    If you create a failover cluster, usually the second node is configured to be the slave node.

  3. Enter the one-time password and click the OK button to build up the connection.

    Entering the one-time-password

    Figure 12.10. Entering the one-time-password


    Details on the background procedures are displayed in a separate text window. Save the output with the Save button so that it can be analyzed later if needed.

    Note

    You can check the status and connections of cluster nodes by selecting the Connections item in the Management menu.

    After bootstrapping a cluster and adding new properties, you can freely add the necessary components and configure the nodes according to your needs. Basically, the configuration procedure is similar to a PNS host configuration. When configuring clusters using the Keepalived component, proceed to Procedure 12.5.3.1, Configure Keepalived.

12.4.4. Procedure – Converting a host to a cluster

Existing PNS hosts can also be converted into a cluster relatively easily. In this case the PNS host will be converted into a node of the new cluster.

Warning

When a host is converted into a cluster, it retains all parameters that were set explicitly on the host. These parameters have to be replaced with links manually if needed. Typically, properties have to be created and links used for the hostname, the IP addresses, and the interface parameters.

  1. Select the host you want to convert to a cluster.

  2. Select Convert Host to Cluster in the Management menu.

  3. Enter a name for the cluster.

12.5. Keepalived for High Availability

12.5.1. Functionality of Keepalived

A modern approach to moving Virtual IP addresses between cluster nodes is available in the Management Console (MC): the Keepalived solution.

Keepalived offers a framework not only for enabling high availability but for load balancing as well. Its load balancing solution relies on the Linux Virtual Server (IPVS) kernel module, while its HA solution is based on the Virtual Router Redundancy Protocol (VRRP). The VRRP protocol provides automatic assignment of available IP routers to ensure resilient routing paths.

12.5.2. Prerequisites for configuring Keepalived

  • The Keepalived component can only be added to a cluster configuration.

  • Install the Keepalived package on the cluster hosts as follows:

    apt install keepalived 

    It is available in the Ubuntu Bionic main repository.

12.5.3. Configuring Keepalived

To configure a Keepalived HA cluster, the suggested steps are as follows:

12.5.3.1. Procedure – Configure Keepalived

  1. Create cluster configuration with configured network interfaces.

  2. Add Virtual IP addresses to the Networking component by setting the interface type to keepalived.

    The type-specific parameters are Address and Netmask; they are the same as for the static interface type.

    Setting interface type to 'Keepalived' in the Networking component

    Figure 12.11. Setting interface type to 'Keepalived' in the Networking component


    Note

    If the type of the interface is keepalived, it must be an alias interface of an existing interface.

    If the type of the interface is keepalived in the Networking component, it cannot be disabled there. To disable the Virtual IP address, disable it in the configuration of the Keepalived component.

    When the Networking component is restarted, the Virtual IPs are dropped from the existing interfaces. To avoid this, add the keep-configuration interface option with the static value to those physical interfaces that have keepalived alias interfaces on them.

    Also, if the static value is set, then after any change made on these interfaces the old values are not removed when the Networking component is restarted; only the new values are added (for example, IP, subnet). Temporarily setting the keep-configuration parameter to no and restarting the node is not advised, because the networking restart also removes all settings added by other sources. It is recommended to reboot the node after these values have been changed, or to configure these changes manually and skip the restart. For details on configuring interface options, see Procedure 5.1.6.3.1, Configuring interface parameters.

  3. Add the Keepalived component to the cluster configuration.

  4. Select the cluster in the configuration tree and click the New button below the Components in use subwindow on the Cluster tab to add the Keepalived component.

  5. Choose the Keepalived default template and change the component name, if needed.

    The Keepalived component appears in the configuration tree.

  6. Set the configuration options for the Keepalived component under the Configuration tab.

    The configuration options for Keepalived component under Configuration tab

    Figure 12.12. The configuration options for Keepalived component under Configuration tab


    The configuration options are as follows (an illustrative keepalived.conf sketch is shown at the end of this procedure):

    • Binding interface:

      The name of the interface that Keepalived binds to.

    • Node IP address:

      This option must be a linked cluster property of an IPv4 or IPv6 address, which is used as the source address in VRRP packets and as the unicast peer IP address.

      Example:

      A firewall cluster with three nodes, where the value of the Node IP address cluster property on each node is:

      • node-1: 10.0.0.1

      • node-2: 10.0.0.2

      • node-3: 10.0.0.3

      In this case, on node-2 the unicast source IP option for Keepalived is 10.0.0.2, and the unicast peer IP addresses are 10.0.0.1 and 10.0.0.3.

    • Node priority:

      It defines the VRRP priority of the cluster nodes.

      Note

      It can be set to the same value for all nodes, or linked via cluster property to be different on all nodes.

    • Default state:

      The start-up default state of the nodes.

      Note

      It can be set to the same value for all nodes, or linked via cluster property to be different on all nodes.

      Note

      In case of a non-preemptive configuration, the default state of all nodes must be BACKUP.

    • Virtual Router ID:

      The value for the VRRP Virtual Router Identifier (VRID).

      Note

      It can be set to the same value for all nodes. If it is linked via cluster property, it can be used for grouping nodes.

    • Debug level:

      This option sets the debug level of the Keepalived VRRP module between the values 0 and 4.

    • Preemptive:

      This option enables or disables preemption as described in the VRRP RFC. If preemption is disabled, a lower priority machine can keep the master role even when a higher priority machine comes back online.

    • Do not track primary interface:

      This option makes Keepalived ignore VRRP interface faults. It is useful for cross-connect VRRP configurations.

    • Check unicast source IP:

      It checks whether the source address of a unicast packet is a unicast peer.

    • Set shared key:

      It is the authentication password used in VRRP packets.

      Note

      Keepalived truncates passwords longer than 8 characters.

    • Virtual IP Addresses:

      This table contains the configured Virtual IP Addresses. The addresses must be listed in the order of configuration precedence. The table can contain only linked IP addresses that are configured in the Networking component on interfaces of the keepalived type.

      Note

      Because of the limitations of the VRRP protocol, the first VRRP packet can contain a maximum of 20 IP addresses. The rest of the Virtual IP Addresses are advertised later in additional packets.

      Note

      When mixing IPv4 and IPv6 Virtual IP Addresses, a limitation of the VRRP protocol applies: the IP addresses in the first VRRP packet must be in the same address family as the first item. The IP addresses of the other address family are advertised later in additional packets.

  7. Configure the following options in the Keepalived component, under the Service failover tab.

    Configuring Keepalived component under Service failover tab

    Figure 12.13. Configuring Keepalived component under Service failover tab


    • Service:

      In this table, systemd service actions can be configured that are executed after a state change on the new master or backup node.

      Service actions are: start, stop, restart and reload.

      Note

      Consider disabling the listed systemd services so that no unrequested actions take place, possibly even in parallel (for example, the service running on both nodes instead of only one).

      Note

      Service names are suggested in the drop-down menu according to the modules added to the cluster configuration. Free-text service names can also be entered.

      Note

      It is not necessary to set an action for both the master and the backup node; it can be set for only one of them.

    • External failover notification scripts:

      User scripts can be entered or selected from a list; these scripts are executed on the hosts when the state change has taken place.

      Warning

      There are strict requirements regarding the permissions of the selected script files; without the necessary permissions, no action is executed. The file must be owned by the root user and group, and no other user may have write permission on the file or on any directory of the file path.

      Note

      In the drop-down menus a file can be selected that is managed in a Freetext plugin for this cluster.

  8. Configure the following items on the E-mail notification tab of the Keepalived component.

    Configuring Keepalived component under E-mail notification tab

    Figure 12.14. Configuring Keepalived component under E-mail notification tab


    • SMTP server:

      The IP address or domain name of the SMTP server to use.

      Note

      An optional port parameter can be added; the default value is 25.

    • Notification email from:

      The sender address used in the header of the notification e-mails, that is, the address the e-mails appear to come from.

    • SMTP connect timeout:

      The SMTP server connection timeout in seconds.

    • E-mail notification recipients:

      The list of e-mail addresses to which notifications about state changes are sent.

    Tip

    Maintenance tip: if the firewall administrator wants to manually change the state of the cluster nodes (switch over), it can be done as follows:

    • Restart (or, in case of a preemptive configuration, stop) the Keepalived service with Control Service on those nodes that are not meant to be master nodes.

    Also consider the Virtual Router IDs: if not all nodes have the same ID, restart (or stop) the Keepalived service only on the nodes that share the same ID.

    Note

    After completing the Keepalived configuration, the Management Access component may become Invalidated if a Keepalived packet filter rule entry has been added or changed. The created rule allows VRRP traffic from all possible node IP addresses to enter the cluster hosts. The Virtual Router ID helps to identify the relevant VRRP packets in case of multiple node groups.
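
To illustrate how the options above map to Keepalived's own configuration, the VRRP instance generated from them might look roughly like the following sketch. This is not the exact file MS generates; the interface name, the shared key and the virtual IP are placeholders, and the node addresses follow the three-node example above:

  vrrp_instance VI_1 {
      state BACKUP                 # Default state
      interface eth0               # Binding interface (placeholder name)
      virtual_router_id 100        # Virtual Router ID
      priority 100                 # Node priority
      unicast_src_ip 10.0.0.2      # Node IP address of this node
      unicast_peer {               # Node IP addresses of the other nodes
          10.0.0.1
          10.0.0.3
      }
      authentication {
          auth_type PASS
          auth_pass secret12       # Set shared key (at most 8 characters)
      }
      virtual_ipaddress {          # Virtual IP Addresses table
          198.51.100.1/24 dev eth0
      }
      # disabling the Preemptive option corresponds to the nopreempt keyword
  }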

12.5.4. Configuration examples and best practices for Keepalived configuration

12.5.4.1. Procedure – Simple Cluster with 2 nodes

  1. Link the Node IP Address to the proper cluster property, which contains the IP addresses to be used for Keepalived VRRP traffic.

  2. Set the Node priority and the Virtual Router ID to a fixed value, for example, 100.

  3. Set the Default state to BACKUP and do not enable the Preemptive option (recommended).

  4. Enter an 8-character random string at the Set shared key option.

  5. Link Virtual IP addresses, which have been configured in the Networking component.

The basic configuration is now complete; it can be uploaded and Keepalived can be started. An illustrative set of per-node values is shown below.
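
For reference, the per-node values of such a minimal two-node cluster could look like the following sketch; the addresses are placeholders and the shared key is not shown:

                         node-1        node-2
  Node IP address:       10.0.0.1      10.0.0.2
  Node priority:         100           100
  Default state:         BACKUP        BACKUP
  Virtual Router ID:     100           100
  Set shared key:        the same 8-character random string on both nodes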

12.5.4.2. Procedure – Testing or Pilot node

If there is an extra node in the cluster for testing or piloting purposes that is not planned to be used in the live Keepalived configuration, two solutions are suggested:

  1. Do not upload Keepalived configuration and do not start Keepalived service on that specific node.

    This trivial solution requires the active attention of the firewall administrator when uploading the Keepalived configuration or managing the service.

  2. Create and link a new cluster property with keepalived_router_id type.

    Set the value of the new cluster property to the same value for the actively used nodes and to a different value for the testing nodes.

    In this case, the testing nodes are in a different VRRP group and they never reach MASTER state in the cluster.

12.5.4.3. Procedure – Multiple backup nodes

It is possible to have multiple backup nodes in the same VRRP group. If the nodes do not have the same hardware or differ in computing speed, it is suggested to set the Node priority by linking a keepalived priority-typed cluster property.

12.5.4.4. Procedure – Multiple VRRP groups in the same cluster

    This setup applies if there are at least 2N nodes in the cluster (where N > 2) and the nodes are logically paired.

    Multiple nodes in the same cluster

    Figure 12.15. Multiple nodes in the same cluster


    1. Create a cluster property with the keepalived_router_id type; the value of the property must be the same for nodes that are paired with each other.

    2. Also create a cluster property for each Virtual IP address, and set its value to the same IP address within each host group. Link this cluster property in the Networking component when adding the keepalived type interface.

12.5.4.5. Procedure – Managing individual OpenVPN tunnels

    Any individual OpenVPN tunnel handled by Keepalived can be referenced as an openvpn@tunnelname.service unit.
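
    For example, with the typical Debian/Ubuntu OpenVPN packaging (an assumption; office is a placeholder tunnel name), a tunnel defined in /etc/openvpn/office.conf corresponds to the following unit name, which can be checked on the node with systemctl:

      openvpn@office.service
      systemctl status openvpn@office.service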

    OpenVPN tunnel service

    Figure 12.16. OpenVPN tunnel service


12.6. Availability Checker

      Knowing the availability status of pre-selected target addresses can be advantageous when performing stateful failover HA functionality: knowing beforehand that a certain address is not available, so that new connections are not attempted towards it, eliminates waiting time. This functionality is implemented in a service daemon named failcheckd, which can monitor IP addresses with different methods and make this availability information visible to the firewall.

12.6.1. Prerequisites for configuring the Availability Checker plugin

      Make sure that the failcheckd service is enabled:

      systemctl enable failcheckd.service

12.6.2.1. Procedure – Configuring the Availability Checker

      Complete the following steps in order to add Availability Checker component to the configuration.

      1. Select the host in the configuration tree and click the New button below the Components in use subwindow on the Host tab to add the Availability Checker component.

      2. Choose the Default template and change the component name, if needed.

      3. The Availability Checker component appears in the configuration tree.

      4. Set the configuration options for the Availability Checker component under the Configuration tab.

        Configuration options for Availability Checker

        Figure 12.17. Configuration options for Availability Checker


        New monitored target addresses can be added as new checks, specifying the following:

        • the check method (Type)

        • host address

        • port

        • check interval

        • response timeout

        • other, method-dependent options

        Apart from the ping, TCP, and HTTP methods, there is a custom check type that uses the return value of any executable program. The return value must be zero for success; any other value represents failure. Programs used in custom checks receive the Response timeout value as their first command line parameter, in the form --timeout RESPONSE_TIMEOUT. Executable programs are terminated with the SIGTERM signal 5 seconds after the timeout set in the Response timeout parameter has elapsed (an illustrative custom check script is shown at the end of this procedure).

        Check details

        Figure 12.18. Check details


      5. Check the Status tab. After the plugin configuration has been uploaded and the service restarted, the Status tab shows the actual status of the checks. The status of each target address is aggregated from the status values of the checks configured for that address.

        Status of checks

        Figure 12.19. Status of checks


        Note

        The auto-refresh time of the Status tab can be configured globally in the Program status section of the Preferences... option in the Edit menu.
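
      As an illustration of the custom check type described above, the following minimal script pings a target and exits with zero on success and non-zero on failure. The path, the target address and the exact invocation are assumptions based on the description in this procedure, not part of the product:

        #!/bin/sh
        # Illustrative custom check (hypothetical path: /usr/local/bin/check-upstream.sh).
        # According to the description above, failcheckd invokes it as:
        #   check-upstream.sh --timeout RESPONSE_TIMEOUT
        TIMEOUT=5
        if [ "$1" = "--timeout" ] && [ -n "$2" ]; then
            TIMEOUT="$2"
        fi
        # Exit 0 on success, non-zero on failure, as the Availability Checker expects;
        # the exit status of ping becomes the exit status of the script.
        ping -c 1 -W "$TIMEOUT" 198.51.100.1 > /dev/null 2>&1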

Chapter 13. Advanced MS and Agent configuration

      The Management Server (MS) is the central component of the PNS Management System. It governs all configuration settings, manages the firewall services through agents and handles database functions.

      MS provides a tool for the complete control and maintenance of the PNS firewalls. You can create new firewall configurations and propagate them to the firewall nodes. MS stores these configurations in an associated XML database, making them available for later administrative operations.

      Communication with the PNS firewall software is realized by the Transfer Agent, which is responsible for accepting and executing configuration commands.

      PNS components communicate using agents

      Figure 13.1. PNS components communicate using agents


      For further information on MS and the basic architecture, see Chapter 2, Concepts of the PNS Gateway solution.

      To modify firewall settings you need to carry out the following procedure regardless of which component is configured.

      1. Make the necessary changes in a component's configuration.

        Changes can be undone with the Revert option as long as they are not committed to the MS database.

      2. Commit the new configuration to the MS database.

        The MS host stores the modified information in its XML database. Remember to commit the changes before leaving the component.

        You can always view the new configuration and compare it with the current firewall configuration with the help of the View and Check options, respectively.

      3. Upload the configuration to propagate the changes from the MS database down to the firewall(s).

        During this process MS converts the configuration data to the proper configuration file format and sends it to the transfer agents on the firewall nodes.

      4. Reload the altered configuration or restart the corresponding services to activate the changes.

        Typically, reloads or restarts are performed after finishing all configuration tasks with the various service components.

      For more details, see Chapter 3, Managing PNS hosts.

13.1. Setting configuration parameters

      MS configuration is realized by setting the appropriate parameters of the Management server component.

      The Management server component

      Figure 13.2. The Management server component


      The following parameters can be configured.

      Parameter name   Description
      auth             Authentication settings: handling user data and passwords
      backup           Backup settings: configuring backup
      bind             Listen address for GUI connections: setting connection between MS and MC
      connection       Connection settings for agents: defining connection parameters
      database         Database settings: defining saving to disk
      diff             DIFF generator settings: commanding configuration check
      http             http proxy for CRL settings: defining proxy address
      log              Log settings: configuring logging parameters
      ssl              SSL handshake settings: configuring SSL settings for MS and agent connection

      Table 13.1. MS configuration parameters


      By using global settings it is possible to apply default values to the parameter set.

      Note

      It is recommended to use the global settings when no special configurations are needed.

      Different configurations are possible for the following subsystems:

      • configuration database,

      • key management,

      • and transfer engine.

      13.1.1. Configuring user authentication and privileges

      With the Authentication settings (auth) parameter you can configure access to MS.

      MS authentication settings

      Figure 13.3. MS authentication settings


      Users are listed in the main textbox. You can perform the following tasks:

      Note

      Only the admin user can delete users, or modify the password and privileges of another user.

      13.1.1.1. Procedure – Adding new users to MS

      1. Navigate to the Management server component of the host running MS, and select the auth parameter from Global parameters.

      2. Click New to add new users to the system.

      3. Enter username and password in the opening window.

        Adding a new MS user

        Figure 13.4. Adding a new MS user


      4. Confirm password.

      5. Click OK, commit and upload your changes, and reload the Management server component.

      13.1.1.2. Procedure – Deleting users from MS

      Note

      Only the admin user can delete users, or modify the password and privileges of another user.

      1. Navigate to the Management server component of the host running MS, and select the auth parameter from Global parameters.

      2. Select the user you want to delete and click Delete.

      3. Commit and upload your changes, and reload the Management server component.

      13.1.1.3. Procedure – Changing passwords in MS

      Note

      Only the admin user can delete users, or modify the password and privileges of another user.

      1. Navigate to the Management server component of the host running MS, and select the auth parameter from Global parameters.

      2. Select the username whose password you want to change.

      3. Click Edit.

      4. Enter the current password and a new password in the opening window.

      5. Confirm the new password.

      6. Click OK, commit and upload your changes, and reload the Management server component.

        Change MS user password

        Figure 13.5. Change MS user password


      13.1.1.4. Configuring user privileges in MS

      In order to help configuration auditing and the general process of PNS administration, MC users can now have different access rights. That way different administrators can have different responsibilities — PKI management, log analysis, PNS configuration, and so on.

      Note

      Only the admin user can delete users, or modify the password and privileges of another user.

      13.1.1.4.1. Procedure – Editing user privileges in MS

      Note

      Only the admin user can delete users, or modify the password and privileges of another user.

      1. Navigate to the Management server component of the host running MS, and select the auth parameter from Global parameters.

      2. Select the username whose privileges you want to edit.

      3. Click Set rights.

        Note

        To change the password of the user, click Edit.

      4. Select the privileges you want to grant to the user. A user can have none, any, or all of the following privileges:

        • Modify configuration: Modify and commit the configuration of the hosts. The user can perform any configuration change, and commit them to the MS database, but cannot activate the changes or control any services or components.

        • Control services: Start, stop, reload, or restart any instance, service, or component. This right is required also to upload configuration changes to the hosts.

        • PKI: Manage the public key infrastructure of PNS: generate, sign, import and export certificates, CAs, and so on.

        • Log view: View the logs of the hosts.

        To create a 'read-only' user account for auditing purposes, do not select any privileges.

        To create a user account with full administrator rights, select every privilege.

      5. Click OK, commit and upload your changes, and reload the Management server component.

        Edit user privileges

        Figure 13.6. Edit user privileges


      13.1.1.5. Configuring authentication settings in MS

      Users connecting to MS using MC must authenticate themselves. The following authentication methods are available:

      • Local accounts: MS stores the usernames and passwords in a local database. This is the default authentication method.

      • Local accounts and AS authentication: MS stores the usernames locally, but authenticates the users against an AS instance. All users who successfully authenticate against AS and have a local account can connect to MS.

      13.1.1.5.1. Procedure – Modifying authentication settings

      1. Navigate to the Management server component of the host running MS, and select the auth parameter from Global parameters.

      2. Select the desired authentication method in the Authentication method field.

      3. If you selected Local accounts and AS authentication, you have to configure access to AS in the AS configuration section.

        Note

        Using these authentication methods requires an already configured AS instance. See Chapter 15, Connection authentication and authorization for details on using and configuring AS.

        Enter the IP address or the hostname of the Authentication Server into the Provider host field. By default, AS accepts connections on port 1317.

        Select the certificate that MS will use to authenticate itself from the Certificate field.

        Select the CA group that contains the CA that issued the certificate of AS from the CA group field. MS will use this group to verify the certificate of AS.

      4. If you are running more than one authentication backend (more than one AS instance), create a new router in the Authentication server MC component that will direct the authentication requests coming from MS to the appropriate AS instance.

        Add a new condition to the router: enter Authentication-Peer into the Variable field and MS into the value field.

        For details on configuring AS routers, see Section 15.3.1.2, Configuring routers.

        Note

        MS also sends the username in the authentication requests. This can be used to direct authentication requests to different AS instances based on the username.

      5. Click OK, commit and upload your changes, and reload the Management server component.

      13.1.2. Configuring backup

      With the Backup settings (backup) parameter you can define the automatic MS database backup method. You can enable automatic backup and then determine the base time for the first backup and also the interval between all subsequent backup processes. Additionally, you can define how many database backup copies are stored. Alternatively, you can create scripts to handle the backup tasks.

      Backups are stored in the /var/lib/vms/backup directory of the MS host. The name of the backup file is MS-backup-<timestamp>.tar.gz

      Warning

      If you do not enable automatic backup you have to save the database manually, otherwise all database information might get lost.

      13.1.2.1. Procedure – Configuring automatic MS database backups

      1. Select Enable automatic backup.

      2. Enter Base time for the first backup.

        Use hours:minutes format, for example, 08:00.

      3. Define the Interval between the backups.

        Use hours:minutes format, for example, 00:30. In most cases it is sufficient to back up once or twice daily.

      4. Select the number in Keep generation field to define how many backup records are stored.

        The default value is 10, meaning that only the last 10 database backups are available. Allowed values range from 1 to 100.

        The more backups you store, the more disk space is used. Since one record is only 1-2 MB, you can keep 20-50 records if needed without overloading the system.

      5. (OPTIONAL)

        Tick Use external script and give the path of the desired script if you want to use a special script for the backup.

        Tip

        Using an external script is a good way to copy the backups to an external host, for example, a backup server.
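
        A minimal sketch of such an external script, assuming a hypothetical backup server and destination directory; it simply copies the newest backup archive off the MS host:

          #!/bin/sh
          # Illustrative external backup script: copy the newest MS backup archive
          # to a remote backup server (host and destination path are placeholders).
          LATEST=$(ls -1t /var/lib/vms/backup/MS-backup-*.tar.gz | head -n 1)
          scp "$LATEST" backup@backup.example.com:/srv/ms-backups/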

      Maintenance of database backup

      Figure 13.7. Maintenance of database backup


      13.1.2.2. Procedure – Restoring a MS database backup

      The following steps describe how to restore an earlier MS database.

      Warning

      Restoring an earlier database will delete the current database, including the configuration of every host, as well as any certificates stored in the PKI system. Any modification and configuration change performed since the backup was created will be lost.

      1. Copy the database backup archive file to the MS host.

      2. As root, issue the /usr/share/MS/MS-restore.sh <backup-file-to-restore> command.

      3. Follow the on-screen instructions of the script. If the restore process fails, the /usr/sbin/vms-integrity utility can fix the database backup archive file. To accomplish this, complete the following steps.

        1. Create a temporary working directory

          mkdir /tmp/vms-backup

        2. Unpack the archive file

          tar -zxf <backup-file-to-restore> -C /tmp/vms-backup

        3. Try to recover the database

          /usr/sbin/vms-integrity -r -d /tmp/vms-backup

        4. Check the recovered database

          /usr/sbin/vms-integrity -d /tmp/vms-backup

        5. Pack the database

          tar -C /tmp/vms-backup -czf <fixed-backup-file> .

        6. Delete the working directory

          rm -rf /tmp/vms-backup

      4. Log in to MS using MC. If you have reinstalled MS, use the username and password you have provided during the reinstallation.

      5. If the restored database has to be upgraded, MC displays a list of the components to be upgraded. Click Convert.

      6. Select PKI > Distribute Certificates. Note that key distribution will fail on every host except on the MS host. This is normal.

      7. Upload the configuration of the MS host.

      8. Restart at least the Management Server component on the MS host. This will terminate your MC session.

      9. Log in again with MC and check your restored configuration.

      10. Upload and restart any component as needed. You may also need to redistribute the certificates.

      13.1.3. Configuring the connection between MS and MC

      With the Listen address for GUI connections (bind) parameter you can configure the connection between MS and MC. You need to give the bind address and the bind port to define where MS listens for GUI connections.

      13.1.3.1. Procedure – Configuring the bind address and the port for MS-MC connections

      1. Provide the bind address. You have the following alternatives to define the bind address.

        • Enter the IP address manually.

          Configuring IP address manually

          Figure 13.8. Configuring IP address manually


        • Select the needed IP address from the drop-down menu.

          The menu shows the available IP interface addresses.

          Note

          Note that in case the IP address changes for some reason, you need to modify this data manually since changes are not propagated automatically.

        • Use variables.

          Using variables for IP address configuration

          Figure 13.9. Using variables for IP address configuration


          MS resolves variables during configuration generation (view, check and upload processes). For more information on using variables, see Chapter 3, Managing PNS hosts.

        • Create link to an IP address.

          Note

          It is recommended to give the address this way so that future address changes have no effect on the operability of the connection.

          1. Procedure – Using linking for the IP address

          1. Click the link icon.

          2. Select the link target in the opening window.

          3. Click OK.

            The Unlink and Unlink as value options delete existing links. Unlink removes the link completely, leaving the link field empty, while Unlink as value deletes the link but keeps the target IP address in the field, which then behaves as a manually entered address.

      2. Provide the bind port. You can define the bind port similarly to the bind address.

        Setting the bind port for MC connection

        Figure 13.10. Setting the bind port for MC connection


        • Enter the port number manually.

        • Use variables.

        • Create link to a port

      13.1.4. Procedure – Configuring MS and agent connections

      With the Connection settings for agents (connection) parameter you can determine which bind address and port to use for the connection between MS and the agents. You can also set the waiting time and the number of retries.

      Defining the bind address and port for the agents is done similarly to defining the address and port between MS and MC. By default, MS initiates the connection, but if you set the MS port and address in the connection parameter on the agent's side, the agent can also establish the connection. In that case, the same port and address must be set for this parameter as well.

      Agent connection setting

      Figure 13.11. Agent connection setting


      1. Define bind address. The bind address can be defined manually, selected from the available interface addresses, or using variables or links (these possibilities are described in Section 13.1.3, Configuring the connection between MS and MC in detail).

      2. Set the waiting time and the number of retries in the Wait and Retry fields. These configure how long MS waits for live communication with the agents and, if the connection cannot be set up, how many times MS attempts to connect to them.

        Note

        If you set the waiting time too low or the number of retries too high, you may overload the system and cause unnecessary network traffic. The default values are optimal in most cases.

        Setting waiting time and retry times

        Figure 13.12. Setting waiting time and retry times


      13.1.5. Procedure – Configuring MS database save

      With the Database settings (database) parameter you can configure how frequently the MS database is saved to disk, both in idle and in busy mode.

      1. Select the appropriate number in the Idle threshold field to determine the interval between two database backups when the MS XML database is not in use, that is, no changes are committed.

        Give values in seconds. The default value is 10. All values are allowed.

      2. Select the appropriate number in the Busy threshold field to determine the interval between two database backups when the MS XML database is used and changes are committed.

        Give values in seconds. The default value is 120. All values are allowed.

        Note

        Since the XML database is constantly updated when MS is in use, it is recommended not to set the busy threshold too low, to decrease system load. On the other hand, when choosing high threshold values, bear in mind that during a system breakdown or power outage data might be lost, because between saves the database is kept only in memory and not written to disk.

        XML database settings

        Figure 13.13. XML database settings


      3. By default, MS sends warnings a week before a certificate expires. This value can be modified here.

      13.1.6. Setting configuration check

      With the DIFF generator settings (diff) parameter, you can define the command that the Check configuration button executes when the configuration on the host is compared with the MS configuration.

      Enter the full path of the command.

      The default command is /usr/bin/diff. You can use special commands, parameters or your own scripts for the configuration check, if needed.
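
      For instance, instead of the plain diff binary, a small wrapper script could be specified here; the path and the diff options below are only an illustration, not a recommended setting:

        #!/bin/sh
        # Illustrative wrapper for the DIFF generator setting
        # (hypothetical path: /usr/local/bin/ms-diff):
        # unified, whitespace-insensitive comparison of the files passed by MS.
        exec /usr/bin/diff -u -b "$@"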

      13.1.7. Configuring CRL update settings

      With the http proxy for CRL settings (http) parameter you can give the URL of the proxy you want to use to retrieve the CRL from.

      Enter the URL according to your proxy settings, for example, http://proxy.example.com:3128/.

      Setting an HTTP proxy for downloading CRLs

      Figure 13.14. Setting an HTTP proxy for downloading CRLs


      13.1.8. Procedure – Set logging level

      With the Log settings (log) parameter you can determine which type of messages should be logged. You can apply logtags to enable advanced message filtering and also to determine where the messages should be logged to.

      1. Select the desired level of logging.

        The default value is level 3, which means that all important events are logged but detailed debug information is not. The higher the value, the more information is logged. Levels from 0 to 10 are allowed.

      2. (OPTIONAL)

        Give a log specification if you want to fine-tune which level of messages is logged for which logtags, for example, core.debug:2, session.error:8, *.accounting:5.

      3. (OPTIONAL)

        Tick Syslog if you want to log to syslog, otherwise the messages are logged to the standard output (STDOUT).

        Tip

        Logging on the console might be useful during troubleshooting.

      4. (OPTIONAL)

        Tick Tags if you want to log the logtags as well.

        Tip

        This is beneficial if log messages need to be searched and analysed, but it requires more disk space.

        Settings of MS logging method

        Figure 13.15. Settings of MS logging method


      13.1.9. Procedure – Configuring SSL handshake parameters

      With the SSL handshake settings (SSL) parameter, the certificate verification parameters and other handshake-related information can be set.

      Advanced settings for SSL connection

      Figure 13.17. Advanced settings for SSL connection


      1. Select verification level in the Verify depths field to decide how many levels are verified in the certificate hierarchy.

        Values from 0 to 100 are allowed.

      2. Choose Groups or Advanced with the radio buttons.

        Note

        It is recommended to use the PKI groups configuration.

        1. In Groups settings select the certificate entity for the MS host.

          For example: MS_engine. If the Certificate selector window is opened, it displays the unique identifier of the MS host and also certificate information, such as version, serial number, issue date and validity period, algorithms and keys. This information is useful when selecting which certificate to use.

        2. Select agents validator CA group.

          For example: MS_Host_CA. If the CA group selector window is opened, the CA group can be defined which is used to verify the certificate of the agents during the handshake. Data is available on CA group name, certificate name and certificate information for the selected CA groups.

          SSL settings

          Figure 13.16. SSL settings


          OR

        3. In Advanced settings enter manually the following data.

          • full path of the file where the private key is stored,

          • certificate,

          • CA directory identifying the directory where the CA certificate entities are stored,

          • and CRL directory giving the place of the CRLs corresponding to the CA


13.2. Setting agent configuration parameters

      Agent configuration, similarly to MS configuration, is realized by setting the appropriate parameters. The following parameters can be configured for the agents.

      Parameter name   Description
      connection       Connection settings for agents: defining connection parameters
      engine           Engines to connect: configuring connection to engines
      log              Log settings: configuring logging parameters
      ssl              SSL handshake settings: configuring SSL settings for MS and agent connection

      Table 13.2. Agent configuration parameters.


      By using global settings it is possible to apply default values to the parameter set.

      Note

      It is recommended to use the global settings when no special configurations are needed.

      13.2.1. Configuring connections for agents

      With the Connection settings (connection) parameter you can set which bind address and port to use for the connection between the agent and MS. The agent receives or initiates the connection from or towards the MS engine using this address. The waiting time and the number of retries can also be set. Note that it is recommended to allow MS to establish the connection towards the agent. The configuration process is identical to the one described in Section 13.1.3, Configuring the connection between MS and MC.

      For further information, see Section 13.3, Managing connections.

      13.2.2. Configuring connection to engine

      With the Engines to connect (engine) parameter you can configure which engine the agent connects to. Generally, it is the engine that connects to the agent, but if this parameter is set, the agent initiates the connection based on these settings.

      Note

      It is recommended to leave this parameter empty and allow the MS engine to establish the connection towards the agents.

      13.2.3. Procedure – Configuring logging for agents

      With the Log settings (log) parameter you can define which types of messages should be logged on the agent's side. Logtags can be applied to enable advanced message filtering and also to determine where the messages should be logged.

      1. Select the desired level of logging.

        The default value is level 3, which means that all important events are logged but detailed debug information is not. The higher the value, the more information is logged. Levels from 0 to 10 are allowed.

      2. (OPTIONAL)

        Give a log specification if it is required to fine-tune which level of messages is logged for which logtags, for example, core.debug:2, session.error:8, *.accounting:5.

      3. (OPTIONAL) Tick Syslog to log to syslog, otherwise the messages are logged to the standard output (STDOUT).

        Tip

        Logging on the console might be useful during troubleshooting.

      4. (OPTIONAL)

        Tick Tags to log the logtags as well.

        Tip

        This is beneficial if log messages need to be searched and analysed, but it requires more disk space.

        Setting up agent logging

        Figure 13.18. Setting up agent logging


      13.2.4. Procedure – Configuring SSL handshake parameters for agents

      With the SSL handshake settings (SSL) parameter, the certificate verification parameters for the agent and other handshake-related settings used between the agent and MS can be configured.

      1. Select verification level in the Verify depths field to decide how many levels are verified in the certificate hierarchy.

        Values from 0 to 100 are allowed.

      2. Choose Groups or Advanced with the radio buttons.

        Note

        It is recommended to use the PKI groups configuration.

        1. In Groups settings select the certificate entity for the agent.

          For example: MS_engine.

          If the Certificate selector window is opened, it displays the unique identifier of the MS host and also certificate information, such as version, serial number, issue date and validity period, algorithms and keys.

          Tip

          This information is useful when selecting which certificate to use.

        2. Select engine validator CA group.

          For example: MS_engine_CA.

          If the CA group selector window is opened, the CA group can be defined which is used to verify the certificate of the agents during the handshake. Data is available on CA group name, certificate name and certificate information for the selected CA groups.

          OR

        3. In Advanced settings enter manually the following data:

          • full path of the file where the private key is stored

          • certificate

          • CA directory identifying the directory where the CA certificate entities are stored

          • CRL directory giving the place of the CRLs corresponding to the CA


          Advanced settings of SSL connection parameters

          Figure 13.19. Advanced settings of SSL connection parameters


13.3. Managing connections

      MS communicates with the PNS firewall software through agents. The vms-transfer-agent communicates using TCP port 1311.

      The initial connection between the firewall and MS needs to be set up manually, but all subsequent communication channels are established automatically. By default, the MS host initiates the connection towards the agents, but agents can also establish the communication if configured to do so. You can administer the connections through the MS host main workspace and with the help of the Management / Connections menu.
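
      As a quick, illustrative check from a shell on either host, you can look for sessions on the agent port mentioned above; this command is a generic Linux tool, not part of the MS tooling:

        ss -tnp | grep 1311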

      13.3.1. Setting up initial connection with management agents

      See Procedure 13.3.4, Configuring recovery connections.

      13.3.2. Configuring connection with agents

      Provide the MS port and address for the Connection settings for agents (connection) parameter if the agents are required to initiate the connection towards MS using this port and address. The same port and address should be given on the agent's side. If no values are provided, the default connection setup is used, that is, MS connects to the agents.

      For further information, see Section 13.2.1, Configuring connections for agents.

      13.3.3. Procedure – Administering connections

      1. Go to the main workspace of the MS host.

      2. Select Management connection.

        Setting the management connection

        Figure 13.20. Setting the management connection


      3. Query the connection status by choosing the Management connection option from the Host management menu.

        The Last result field shows the outcome of the previous connect or disconnect operation.

      4. (OPTIONAL)

        Stop or Start the communication by selecting the connection and clicking Connect or Disconnect.

        The managing connection window

        Figure 13.21. The managing connection window


      13.3.4. Procedure – Configuring recovery connections

      Configure a recovery connection in the following cases:

      • Connecting a new machine (firewall node) to the MS without bootstrapping (to set up the initial connection between MS and the PNS firewall).

      • Installing a new firewall machine to replace a previous one and configuring it based on MS data.

      The authentication in this case is done using a One-Time-Password (OTP) instead of certificates. After successful authentication, the MS receives the configuration data of the agent together with the necessary PKI information (certificate, key and CRL). All further authentication procedures will use this data. After the agent is restarted, the MS initiates the reconnection. The administration can be done as normal afterwards.

      Note

      The agent needs to be in OTP mode to be able to receive the connection.

      1. Log in to the PNS host that you want to reconnect to MS.

      2. Reconfigure the vms-transfer-agent with the following terminal command:

        dpkg-reconfigure vms-transfer-agent-dynamic

      3. In the displayed window, enter a One-Time-Password (OTP) that the host will use to connect to MS, and store the password temporarily for later use.

      4. Log in to your Management Server using MC.

      5. Select the host that needs the recovery connection in MC, and click Recovery connection.

        Starting a recovery connection

        Figure 13.22. Starting a recovery connection


      6. Enter the same One-Time-Password (OTP) that you set during the installation on the host.

        Entering the one-time-password

        Figure 13.23. Entering the one-time-password


      7. Upload and reload the configuration of every component of the host.

13.4. Handling XML databases

      The XML database in the MS is functionally divided into two parts and stores two basic types of information.

      • predefined information

        For example: proxies, or add-ons. This information is provided by BalaSys but other definitions can also be freely added.

      • configuration settings of your sites, hosts or components.

        These are the settings created in MC. The configuration files are generated on the basis of this data.

      MS loads the database information from the /var/lib/vms library and places it in the appropriate part of the XML database. During a database save, MS carries out the reverse process: it takes the data from the XML database and saves it into the appropriate file and folder of the library.

      The /var/lib/vms library is composed of the following folders and XML files, summarized in the sketch after this list.

      • vms_userdatabase.xml

        including your configuration settings created in the MC except for PKI information

      • vms_keymngt.xml

        including all PKI information and related user settings

      • configdb folder

        storing templates, databases, definitions necessary for the MS such as built-in proxies or other settings provided by BalaSys (configuration settings are excluded)

        Note

        Do not change the files in this folder because during upgrade the content of the folder is automatically overwritten. Necessary modifications should be stored in the configdb-user folder.

      • configdb-user folder

        containing your additional settings and templates for the MS in separate XML files

        (These files are modified versions of the configdb/vms_database.xml file). This data is not deleted during upgrade.

      • keymngt folder

        containing certificate entities (certificate, key and CRL files generated by PKI)

        Note

        Do not modify this folder.

      • backup folder

        storing by default the backup of the XML files and folders

        For further information, see Section 13.1.2, Configuring backup.

      Chapter 14. Virus and content filtering using CF

      Virus, spam and other types of content filtering services are nowadays essential components of security solutions both in home and in production environments. Spam, viruses, trojans, malicious scripts pose a significant threat to the users through e-mails, downloadable files, or even simple webpages. Firewalls are a logical location for content filtering, as all traffic must travel across the firewall, and usually this is the earliest point where the incoming traffic can be examined and interacted with. CF provides an integrated framework to manage the various available Content Filtering components using a single, uniform interface.

      The sections of this chapter provide an in-depth description of CF. For a basic overview of the CF framework, Content Filtering and a list of the supported modules, see Section 2.1.6, The concept of the CF framework.

      14.1. Content Filtering basics

      Content Filtering is basically the act of inspecting the transmitted data (downloaded by a web browser, sent over SMTP, and so on) to detect and reject unwanted content. Depending on the environment and the circumstances, unwanted content can mean viruses, ad- or malware, spam e-mails, client-side scripts (that is, Java, JavaScript, and so on), or simply websites containing information not permitted for the users (such as adult sites, and so on).

      Note

      Content Filtering may seem to be similar to application level protocol inspection performed by PNS proxies. However, it is essentially different: the proxies only analyze the elements of the protocol, not the transferred data itself.

      The main types of content filtering are summarized below.

      • Virus filtering: The most classical and well-known form of Content Filtering is virus filtering: examining the files being transferred to verify that they do not contain any software that may harm the user's computer or infrastructure. Most virus filtering engines also detect adware and trojan programs. If a virus is detected, often it is possible to remove the virus (disinfect the file) without any side-effect.

      • Spam filtering: Spam filtering examines e-mails (usually in the SMTP traffic) to delete unwanted advertisements, viruses spreading through e-mails, and so on.

      • Disabling client-side scripts in HTML: Client-side scripting is a popular method for decreasing the load of webservers. It means that certain actions are performed on the client machine (for example, in a submission form a client-side script could check that all fields are filled, without having to connect to the server). However, such scripts can be exploited to perform virtually any operation on the client machine. Therefore, often they are disabled and completely removed from the webpages as they are downloaded.

      • General HTML content filtering: Access to certain webpages is also often limited based on the contents of the page — usually based on the keywords occurring in the page. Most commonly this takes the form of blacklisting/whitelisting, to deny access to pages containing prohibited or illegal content, or simply to pages not related to the everyday work of the organization.

      Content Filtering is possible using two approaches: file-based and stream-based filtering. File-based filtering is used when the complete object (file) is needed to perform the checking, such as in virus and spam filtering. (Virus filters cannot work on partial files.) Stream-based filtering monitors a continuous data flow (that is, a webpage being downloaded) and removes the prohibited contents (such as JavaScripts, images, and so on).

      14.1.1. Quarantining

      Objects (infected files, spam e-mails) rejected by the Content Filtering system are usually deleted. However, in some environments this is not acceptable: these objects might be temporarily stored in a location where they can do no harm (that is, in the quarantine), until it can be verified that they do not contain any useful information. The reason for using a quarantine is that occasionally information in a file might be needed even though the file is infected (disinfecting a file is not always possible, and sometimes may damage the non-infected parts of the file as well). Also, virus and spam filters are not unerring; occasionally they might detect a file/e-mail as infected/spam even when it is not. If rejected objects are simply deleted, important information might be lost in these cases.

      Tip

      In production environments it is recommended to use multiple content filtering engines to examine the same traffic to reliably detect viruses. PNS CF fully supports the use of multiple Content Filtering engines to inspect the same content.

      14.2. Content Filtering with CF

      This section describes how to initialize and configure CF, and how rule groups, scanpaths, and other objects can be created and used to form Content Filtering policies. Setting up the communication between CF and PNS is described in Section 14.2.4, Configuring PNS proxies to use CF. Before starting to configure CF, add the Content Filtering component to the host (see Procedure 3.2.1.3.1, Adding new configuration components to host for details).

      14.2.1. Creating module instances

      Module instances are the elements of the available CF modules configured for a particular Content Filtering task. A data stream or an object can be inspected by one or more module instances. Module instances can be created and configured on the Modules tab of the Content Filtering MC component.

      The Modules tab of the Content Filtering component

      Figure 14.1. The Modules tab of the Content Filtering component


      The panel has two distinct sections: the left window manages stream module instances, the right window manages file module instances. The functionality and handling of the two management interfaces are identical. Existing module instances are shown in the main section of the panel, organized into a tree based on the module they represent (NOD32, html, and so on). Below this list are buttons to create, delete and edit module instances.

      14.2.1.1. Procedure – Creating a new module instance

      1. Click on the New button below the stream and file module instances lists.

      2. Select the module to be configured from the Module combobox in the window that pops up.

        Selecting the module

        Figure 14.2. Selecting the module


      3. Enter a name (and optionally a description) describing the instance.

      4. Set the parameters of the module using the Module options section of the dialog window. The available options are module-specific and are described in Section 14.2.1.2, CF modules.

        Configuring module options

        Figure 14.3. Configuring module options


      14.2.1.2. CF modules

      Each module available in CF has its own parameters that can be set separately for each module instance. Some of the modules have global options that apply to all instances of that module; these are also described at the particular module.

      The clamav module

      The clamav module uses the Clam AntiVirus engine to examine incoming files. It supports only file mode and only detects infected files; it does not attempt to disinfect them. The module automatically scans archived files as well.

      The HTML module

      The HTML module can be used to filter various scripts and tags in HTML pages. It can operate both in file and stream modes.

      The HTML module

      Figure 14.4. The HTML module


      The HTML module has the following options:

      • Enable JavaScript filtering: Remove all JavaScripts. Enabling this option removes all javascript and script tags, and the conditional value prefixes (for example, onclick, onreset, and so on).

      • Enable ActiveX filtering: Remove all ActiveX components. Enabling this option removes the applet tags and the classid value prefix.

      • Enable Java filtering: Remove all Java code references. Enabling this option removes the java: and application/java-archive inclusions, as well as the applet tags.

      • Enable CSS filtering: Remove cascading stylesheet (CSS) elements. Enabling this option removes the single link tags, the style tags and options, as well as the class options.

      • Filter HTML tags: Custom filters can be added to remove certain elements of the HTML code using the New button. The filter can remove the specified values from HTML tags, single tags, options and prefixes (specified through the Filter place combobox). The Filter value specifies the name of the tag/header to be removed.

        Filtering HTML tags

        Figure 14.5. Filtering HTML tags


        The Filter place parameter has the following options:

        In tags: Remove everything between the specified tag and its closing tag. Embedded structures are also handled.

        In single tags: Remove all occurrences of the specified single tag. A single tag is a tag that does not have a closing element, for example, img, hr, and so on.

        In options: Remove options and their values, for example, width, and so on.

        In prefixes: Remove all options starting with the string set as Filter value. The on option will, for example, remove all options like onclick, and so on.
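
        The effect of an In prefixes filter can be pictured with the following minimal Python sketch, which removes every HTML option whose name starts with a given prefix (for example, on strips onclick, onmouseover, and so on). This is only a conceptual model of the filter's effect, not the implementation of the HTML module, and the function name is made up for the example.

          import re

          # Conceptual model only: drop every attribute whose name starts with
          # the given prefix, for example "on" removes onclick, onmouseover, ...
          def strip_prefixed_options(html, prefix):
              pattern = re.compile(
                  r'\s+' + re.escape(prefix) + r'\w*\s*=\s*("[^"]*"|\'[^\']*\'|[^\s>]+)',
                  re.IGNORECASE,
              )
              return pattern.sub('', html)

          page = '<a href="/" onclick="steal()" onmouseover="track()">home</a>'
          print(strip_prefixed_options(page, 'on'))
          # prints: <a href="/">home</a>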

      Note

      The HTML module is designed to process only text data. It cannot handle binary data, thus directing binary files to the module should be avoided.

      The NOD32 module

      The NOD32 module uses the NOD32 virus filtering engine to examine incoming files. It supports only file mode.

      The NOD32 module

      Figure 14.6. The NOD32 module


      The module has the following parameters:

      • Scan packed: It enables or disables virus scanning on archived files.

      • Scan suspicious: It enables or disables virus scanning on suspicious files (for example, suspicious files are often new variants of known viruses).

      • Heuristic scan level: It defines the level of heuristic (non-database based) sensitivity. The available levels are OFF, and NORMAL.

      • Archive max size: It defines the maximum unpacked size (megabytes) of a single archive scanned. If a 2.5 MB .zip file, for example, contains a file that is 80 MB uncompressed, and the Archive max size option is set to 10 MB, the file will not be scanned for viruses. However, if the Archive max size option is set to 100 MB, CF will scan the file.

      Note

      ESET's NOD32 module tries to resolve the reverse host names of all locally assigned IP addresses on static, VLAN and TUN interfaces (except for loopbacks) for licensing purposes at CF startup. To ensure the fastest CF startup and restart, the reverse host names must be available through a DNS service or via the hosts file. If there is a DNS service but the reverse names are not available, quick NXDOMAIN responses are sufficient as well. Without a DNS service the NOD32 plugin does not work and logs license activation related error messages.

      The mail header filtering (mail-hdr) module

      The mail-hdr module can filter and manipulate e-mail headers in both stream and file modes. It scans the incoming e-mail (stream or file) using regular expressions and deletes or modifies the matching headers. New headers can also be inserted into the mails.

      Warning

      E-mail headers are processed and manipulated line-by-line. However, a header can span multiple lines.

      A single instance can include multiple filters; the order in which these filters are processed can be set using the arrow buttons. Each filter consists of a pattern, an action that is performed when the pattern is found, and an argument (for example, a replacement header). Note that not every action requires an argument.

      Filtering mail headers

      Figure 14.7. Filtering mail headers


      A filter has the following parameters:

      • Header pattern: It is the search string to be found in the headers, given as a regular expression. The Case insensitive option described below also applies to the pattern.

      • Action: It is the action to be performed on the header line or the whole message if the pattern is found in the message. The following actions are available:

        • Append: Add the argument of the filter as a new header line after the match.

        • Discard: Discard the entire e-mail message. The argument is returned to the mail server sending the message as an error message.

        • Ignore: Remove the matching header line from the message.

        • Pass: Accept the matching header line. This action can be used to create exceptions from other filter rules.

        • Prepend: Add the argument of the filter as a new header line before the match.

        • Reject: Reject the entire e-mail message. The argument is returned to the sender of the message as an error message.

        • Replace: Replace the matching header line with the argument of the filter.

      • Argument: It is the argument of the selected action, for example, the header line to append or prepend, the replacement header, or the error message returned to the sender. When used as a replacement, it can contain \n (n being a number from 1 to 9, inclusive) references, which refer to the portion of the match contained between the nth \( and its matching \). It can also contain unescaped & characters, which reference the whole matched portion of the pattern space.

      • Case insensitive: The case sensitive mode can be disabled by selecting this checkbox.
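
      The line-by-line, first-match behaviour of such a filter list can be modelled with the short Python sketch below. It is only a conceptual illustration (covering just the Ignore, Replace and Append actions and assuming single-line headers, although, as the warning above notes, real headers can span multiple lines); it is not the mail-hdr implementation, and all names in it are made up.

        import re

        # Conceptual model: each filter is (pattern, action, argument); the first
        # filter matching a header line decides what happens to that line.
        filters = [
            (re.compile(r'^X-Mailer:', re.IGNORECASE), 'IGNORE', None),
            (re.compile(r'^Subject: (.*)$'), 'REPLACE', r'Subject: [external] \1'),
            (re.compile(r'^Received:'), 'APPEND', 'X-Scanned-By: example-gateway'),
        ]

        def filter_headers(lines):
            out = []
            for line in lines:
                for pattern, action, argument in filters:
                    match = pattern.search(line)
                    if not match:
                        continue
                    if action == 'REPLACE':        # replace the line with the argument
                        out.append(match.expand(argument))
                    elif action == 'APPEND':       # keep the line, add a new one after it
                        out.extend([line, argument])
                    # 'IGNORE' simply drops the matching line
                    break
                else:
                    out.append(line)               # no filter matched: keep the line
            return out

        print(filter_headers(['X-Mailer: spammer 1.0', 'Subject: offer', 'To: user@example.com']))
        # prints: ['Subject: [external] offer', 'To: user@example.com']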

      The mime module

      The mime module inspects and filters MIME objects (that is, mail attachments). It can check the MIME headers that describe the objects for validity, and also call a virus filtering CF module to scan the object for viruses. The mime module supports only file mode. The module has the following parameters:

      • Maximum number of headers: It is the maximal number of headers permitted in a MIME object. The object is removed if it exceeds this limit.

      • Maximum length of a header: It is the maximal length of a header in characters. It applies to the total length of the header. The header is removed if it exceeds this limit.

      • Maximum length of a header line: It is the maximal length of a header line in characters. It applies to every single line of the header. The header is removed if it exceeds this limit.

      • Ignore invalid headers: If it is enabled, headers not complying with the related RFCs or violating the limits set in the previous options are automatically removed (dropped).

        Warning

        If Ignore invalid headers is disabled and an invalid header is found, the entire object (for example, e-mail) is rejected.

      • Silently drop rejected attachment: By default, the mime CF module replaces the removed objects (attachments) with the following note that informs the recipient of the message about the removed attachments: The original content of this attachment was rejected by local policy settings. If the Silently drop rejected attachment option is enabled, no note is added to the e-mail.

      • Enable rewriting messages: If it is disabled, the mime module does not modify the messages.

      • Set mime entity to append: The mime CF module can automatically add a MIME object to the inspected messages. To use this feature, verify that the Enable rewriting messages option is enabled, select Set mime entity to append, paste the MIME object into the appearing dialog box, and select OK.

      Options of the mime module

      Figure 14.8. Options of the mime module


      To scan the actual MIME objects (for example, the attachments of an e-mail) for viruses, a special rule group has to be created, called mime-data. Use this as the name of the rule group, and add a virus filtering module (for example, clamav) to this rule group. When the mime module is scanning an e-mail message, it will inspect the attachments, then pass the attachment to the mime-data rule group to scan for viruses. See Section 14.2.3, Routers and rule groups for details on creating rule groups.
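
      The way the mime module hands the individual attachments over for virus scanning can be pictured with the following Python sketch, built on the standard email library. It is only a conceptual model: the scan callback stands for whatever the mime-data rule group would do in CF, and the names are made up for the example.

        import email
        from email import policy

        # Conceptual model only: walk through the MIME parts of a message and pass
        # each non-container part to a scanning callback (in CF this corresponds
        # to the rule group named mime-data).
        def scan_attachments(raw_message, scan):
            msg = email.message_from_bytes(raw_message, policy=policy.default)
            verdicts = []
            for part in msg.walk():
                if part.get_content_maintype() == 'multipart':
                    continue                      # container part, not an actual object
                data = part.get_payload(decode=True) or b''
                verdicts.append((part.get_filename(), scan(data)))
            return verdicts

        # usage with a dummy scanner that accepts everything
        print(scan_attachments(b'Subject: hi\r\n\r\nhello', lambda data: 'ACCEPT'))
        # prints: [(None, 'ACCEPT')]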

      The program module

      The program module is a general wrapper for third-party applications capable of working in stream or file mode. The stream or file is passed to the application set in the Program field.

      A single instance can include multiple filters; the order in which these filters are processed can be set using the arrow buttons.

      A program module has the following parameters:

      • Program: It is the application to be executed.

      • Timeout: If the application set in the Program field does not provide a return value within this interval, it is assumed to be frozen.

      • The program may modify the data: The program may make changes to the data and return the modified version to CF.

      The stream editor (sed) module

      The sed module is a stream editor capable of working in both stream and file mode. It scans the target stream and replaces the string to be found (specified as a regular expression) with another string.

      The stream editor module

      Figure 14.9. The stream editor module


      Warning

      This module is similar to, but not identical with the common UNIX sed command.

      A single instance can include multiple filters; the order in which these filters are processed can be set using the arrow buttons.

      Filtering with the stream editor module

      Figure 14.10. Filtering with the stream editor module


      A filter has the following parameters:

      • Regular expression: It is the search pattern to be found in the stream, given as a regular expression. The Global and Case insensitive options described below control how the pattern is applied.

      • Replacement: The Regular expression will be replaced with this string if found in the stream. The replacement can contain \n (n being a number from 1 to 9, inclusive) references, which refer to the portion of the match which is contained between the nth \( and its matching \). Also, the replacement can contain unescaped & characters which will reference the whole matched portion of the pattern space.

      • Global: Replace all occurrences of the search string. If this checkbox is not selected, the filter replaces only the first occurrence of the string.

      • Case insensitive: Disable case sensitive mode.
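
      As an illustration of the replacement semantics described above, the following Python snippet performs equivalent substitutions. It only demonstrates the regular expression behaviour with Python's re module, where (...) groups and \g<0> correspond to the \(...\) groups and the & reference of the sed module; the module itself is configured in MC, not in Python.

        import re

        text = 'user=alice;user=bob'

        # "Global" replacement (all occurrences) is re.sub's default behaviour
        print(re.sub(r'user=(\w+)', r'account=\1', text))
        # prints: account=alice;account=bob

        # without Global, only the first occurrence is replaced (count=1)
        print(re.sub(r'user=(\w+)', r'account=\1', text, count=1))
        # prints: account=alice;user=bob

        # referencing the whole match (the sed module's "&") is \g<0> in Python
        print(re.sub(r'alice', r'<\g<0>>', text))
        # prints: user=<alice>;user=bob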

      The spamassassin module

      The spamassassin module uses the Spamassassin spam filtering engine to examine incoming e-mails. It supports only file mode.

      The Spamassassin module

      Figure 14.11. The Spamassassin module


      The module has the following parameters:

      • Policy-related options

        • Reject messages over the threshold: Reject the message only if its spam score exceeds the threshold set here (see the sketch after this option list). By default, SpamAssassin classifies all e-mails with a spam score higher than its own limit (called required_score in SpamAssassin terminology, 5 by default) as spam. However, to minimize the impact of false positive alarms, if the spam score of an e-mail (as calculated by SpamAssassin) is over the required_score but below the value set in the threshold, CF only marks the e-mail as spam, but does not reject it. If the spam score of an e-mail is above the threshold, it is automatically rejected.

        • Reject messages as spamd dictates: Reject all e-mails detected as spam by SpamAssassin.

        • Add spam related headers to accepted messages: Append headers to the e-mail containing information about SpamAssassin, the spam status of the e-mail, and so on. Sample headers are presented below.

          X-Spam-Checker-Version: SpamAssassin 3.0.3 (2005-04-27) on mailserver.example.com
          X-Spam-Level:
          X-Spam-Status: No, score=-1.7 required=5.0 tests=BAYES_00 autolearn=ham version=3.0.3
      • Server address

        • Local: SpamAssassin is running on the same host as CF. In this case, communication is performed through a UNIX domain socket.

        • Network: SpamAssassin is running on a remote machine. Specify its address and the port on which SpamAssassin accepts connections in the Host and Port fields, respectively.

      • Other options

        • Profile name: The user under which SpamAssassin should filter e-mails. Default value: not set; in this case the user running SpamAssassin is used (usually nobody).

        • Timeout: It is the timeout value for SpamAssassin.
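
      The interplay of SpamAssassin's required_score and the CF threshold described above can be summarized with the following Python sketch. It is a conceptual model only, not CF code; the constant values are made-up examples.

        # Conceptual model of the spam decision described above.
        REQUIRED_SCORE = 5.0   # SpamAssassin's own spam limit (required_score)
        THRESHOLD = 8.0        # CF rejects only above this value

        def spam_decision(score):
            if score > THRESHOLD:
                return 'REJECT'            # clearly spam: rejected
            if score > REQUIRED_SCORE:
                return 'MARK_AS_SPAM'      # marked as spam, but still delivered
            return 'ACCEPT'

        for score in (2.1, 6.3, 9.7):
            print(score, spam_decision(score))
        # prints: 2.1 ACCEPT / 6.3 MARK_AS_SPAM / 9.7 REJECT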

      The ModSecurity module

      Proxedo Network Security Suite is already capable of protecting various web servers with SSL termination: the proxy controlling the HTTP protocol is responsible for enforcing the RFC, while the Content Filtering System's NOD32 and other modules are responsible for virus filtering of the transferred content. These solutions can now be complemented with a web application-level security gateway module.

      ModSecurity can be integrated into PNS's HTTP proxy with the help of the Content Filtering (CF) module. It provides an additional, independent level of protection for the web servers by analyzing the transferred HTTP headers and data and applying the relevant policies (free: 'OWASP ModSecurity Core Rule Set (CRS) Version 3', or professional: 'Commercial Rules from Trustwave SpiderLabs'). With this solution, malware and non-trustworthy HTTP requests are usually blocked already on the Proxedo Network Security Suite and cannot reach the web server.

      14.2.2. Creating scanpaths

      Scanpaths are lists of module instances and some settings (trickling mode, quarantining, and so on) specific to the given scanpath. Traffic directed to a scanpath is inspected by the module instances specified in the scanpath. Scanpaths can include both stream and file modules, but the stream modules always process the data first.

      Scanpaths can be managed (created, deleted and edited) from the Scanpath section of the Configuration tab of the Content Filtering MC component. To configure a new scanpath, the following procedure must be completed:

      14.2.2.1. Procedure – Creating a new scanpath

      1. Click on New in the Scanpath section of the Configuration tab of the Content Filtering MC component.

        Creating a new scanpath

        Figure 14.12. Creating a new scanpath


      2. Enter a name (and optionally a description) for the scanpath.

      3. Use the Add buttons and select the stream and/or file module instances to be added to the scanpath.

        Adding module instances to a scanpath

        Figure 14.13. Adding module instances to a scanpath


        It is not necessary to use existing module instances; they can also be created on-the-fly using the New button of the Module instance selection dialog window.

        Selecting module instances

        Figure 14.14. Selecting module instances


        The data is processed in the order the module instances are listed (starting always with the stream modules). The order of instances can be changed using the arrow buttons below the lists.

      4. Set and configure the quarantine and trickling mode to be used for the scanpath. Also set the policy for large files. These options are described in Section 14.2.2.2, Scanpath options.

      14.2.2.2. Scanpath options

      The following sections describe the options available for scanpaths. The options can be configured on the General, Trickle, and Options tabs of the scanpath dialog.

      Quarantine and oversized file options

      Quarantine mode specifies when an infected object has to be put into quarantine. The original file is always stored.

      Configuring quarantine policy

      Figure 14.15. Configuring quarantine policy


      • Always: Quarantine all objects.

      • When rejected: Quarantine objects that could not be disinfected or were rejected for any reason.

      • When modified or rejected: Quarantine the modified or infected objects. Modification is performed by the 'sed' and 'mail-hdr' modules. Only the original version of files that have been successfully disinfected is quarantined. For example, if an infected object is found but is successfully disinfected, the original (infected) object is quarantined. This way, the object is retained even if the disinfection damages some important pieces of information.

      • Never: Disable quarantining; objects rejected for any reason are dropped.

      Bypass scanning of large files: By default, all files arriving to the scanpath are scanned. However, this might not be optimal for performance reasons, particularly if large files (for example, ISO images) are often downloaded through the firewall. Therefore, it is possible to specify an Oversize threshold value and an Oversize action. If the Bypass scanning of large files checkbox is selected, objects larger than Oversize threshold (in bytes) are not scanned, but accepted or rejected, based on the settings in Oversize action. It is also possible to return an error message for the oversized files.

      Configuring trickle mode

      Content filtering cannot be performed on partial files — the entire file has to be available on the firewall. The file will only be sent to the client if no virus was found (or the file was successfully disinfected). Instead of receiving the data in a continuous stream, as when connecting to the server “regularly”, the client does not receive any data for a while, then it “suddenly” starts to flow. This phenomenon is not a problem for small files, since these are transmitted and checked fast, probably without the user ever noticing the delay, but can be an issue for larger files when the client application might time out. It can also be inconvenient when the bandwidth of the network on the client and server side of the firewall is significantly different. In order to avoid timeouts, a solution called trickling is used. This means that the firewall starts to send small pieces of data to the client so that the client feels it is receiving something and does not time out. For further information on trickling, see the Virus filtering and HTTP Technical White Paper available at the BalaSys Documentation Page. CF supports the following trickling modes. These can be set on the Trickle tab of the scanpath editor dialog.

      Configuring trickling

      Figure 14.16. Configuring trickling


      • No trickling: Trickling is completely disabled. This may result in many connection timeouts if the processing is slow, or large files are downloaded on a slow network.

      • Percent: Determine the amount of data to be trickled based on the size of the object. Data is sent to the client only when CF receives new data; the size of the data trickled is the set percentage of the total data received so far.

      • Steady: Trickle a fixed amount of data at fixed time intervals. Trickling is started only after the period set in Initial delay before the first packet. If the whole file is downloaded and processed within this interval, no trickling is used.

      Tip

      It is recommended to use the percent-based trickling method, because the chance of an operable virus trickling through the system unnoticed is higher when steady trickling is used.
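
      The percent-based mode can be pictured with the simplified Python sketch below: every time new data arrives from the server, at most the configured percentage of the total data received so far may be passed on to the client. This is only a conceptual model of the trickled amounts, not CF's scheduling code, and the numbers are made-up examples.

        # Conceptual model of percent-based trickling.
        def percent_trickle(cumulative_received, percent):
            released = 0
            for total in cumulative_received:            # cumulative bytes received
                allowed = total * percent // 100         # may be trickled so far
                release_now = max(0, allowed - released)
                released += release_now
                print('received=%8d  trickled now=%6d' % (total, release_now))

        # a 1 MB download arriving in 256 KB steps, trickling at most 5 percent
        percent_trickle([262144, 524288, 786432, 1048576], percent=5)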

      Automatic decompression and error handling

      To enable MIME-type checking for the files, select the Force the mime-type detection checkbox.

      CF can automatically decompress gzipped files and transmissions and pass the uncompressed data to the modules. After the modules process the data, CF can recompress it and return it to PNS. To enable automatic decompression, select the Transparently decompress gzipped data checkbox.

      The following actions can be set for the gzip headers through the Gzip header strip combobox:

      • Leave all gzip headers intact: Do not modify the headers.

      • Leave filename headers intact: Retain the headers containing filenames, remove all the other ones.

      • Remove all gzip headers: Strip all headers.

      The compression level of the recompressed data can be set using the Recompression level spinbutton.

      Note

      Compression level of 5 or higher can significantly increase the load on the CPU.

      Automatic decompression, error handling and mime-type detection

      Figure 14.17. Automatic decompression, error handling and mime-type detection


      In some cases it is possible that a module or CF cannot check an object for some reason (for example the file is corrupted, or the license of the module has expired). In such situations CF rejects all objects by default. Exemptions can be set for the errors described below: check the errors for which all objects should be accepted.

      • Corrupted file: The file is corrupt and cannot be decompressed. Certain virus scanning modules handle encrypted or password-protected files as corrupted files.

      • Encrypted file: The file is encrypted or password-protected and cannot be decompressed.

      • Unsupported packed file: The file is compressed with an unknown/unsupported compression method and cannot be decompressed.

      • Engine warning: The file is suspicious, heuristic virus scanning detected the file as possibly infected.

      • OS error: A low level error occurred (the module ran out of memory, or could not access the file for some reason).

      • Engine error: An internal error occurred in the module while scanning the file.

      • License error: The license of the module has expired.

      14.2.3. Routers and rule groups

      Routers are simple conditional rules (that is, if-then expressions) that determine how the received object has to be inspected. They consist of a condition and a corresponding action: if the parameter of the traffic (or file) matches the set condition, then the action is performed. The condition consists of a variable and a pattern: the condition is true if the variable of the inspected object is equal to the specified pattern. The action can be a default action (for example, ACCEPT, REJECT, and so on) or a scanpath. Routers cannot be used on their own, they must belong to a rule group. A rule group is a list of routers, defining a set of conditions that are evaluated one-by-one for a given scenario. Rule groups also have a default action or scanpath that is performed if none of the set conditions match the received object. Rule groups are also important because a PNS proxy can send data only to a rule group, and not to a specific router (see Section 14.2.4, Configuring PNS proxies to use CF for details).

      Warning

      Only the action or scanpath corresponding to the first matching condition is performed; therefore, the order of the routers in a rule group is very important.

      Routers and rulegroups

      Figure 14.18. Routers and rulegroups


      Routers and rule groups can be managed (created, deleted and edited) from the Rule groups section of the Configuration tab of the Content Filtering MC component. The defined rule groups and their corresponding routers (conditions and actions) are displayed as a sortable tree.

      Tip

      Rule groups and routers can be disabled from the local menu if they are temporarily not needed.

      To create and configure a set of routers, complete the following procedure:

      14.2.3.1. Procedure – Creating and configuring routers

      1. Navigate to the Configuration tab of the Content Filtering MC component.

      2. Click on New rule group. Enter a name for the rule group (scenario) and select a default action or specify a scanpath to be used as default.

        Creating a new rulegroup

        Figure 14.19. Creating a new rulegroup


      3. Select the newly created rule group, and click on New to add a new router to the group.

        Add a new router to the group

        Figure 14.20. Add a new router to the group


      4. Select the action to be performed if the conditions match using the Target scanpath combobox. The available actions are described in Section 14.2.3.2, Router actions and conditions.

      5. Click on New, and define a condition for the router. Select the variable to be used from the Variable combobox, and enter the search term to the Pattern field. Wildcards (for example, '*', '?') can be used in the pattern. If the Variable of the inspected object matches Pattern, the action specified in Target scanpath will be performed.

        Note

        A router can contain multiple conditions. In this case the target action is performed only if all specified conditions are true.

        Creating a condition for the router

        Figure 14.21. Creating a condition for the router


      6. Add as many routers to the rule group as required.

        Warning

        The routers are evaluated sequentially, therefore it is important to list them in a correct order. The order of the routers in a rule group can be modified using the arrow buttons below the routers tree. The object is only inspected with the scanpath of the first matching router.

        A configured scanpath with routers and conditions

        Figure 14.22. A configured scanpath with routers and conditions


      14.2.3.2. Router actions and conditions

      The following actions are available:

      • ACCEPT: Accept (allow it to pass the firewall) the object.

      • ACCEPT-QUARANTINE: Accept the object, but also store a copy of it in the quarantine.

      • REJECT: Drop the object.

      • REJECT-QUARANTINE: Drop the object, but store a copy of it in the quarantine. That way, it can be retrieved later if needed.

      • Scanpath: Inspect the object according to the specified scanpath.

      The following table describes some of the variables that can be used in the conditions. This table does not list all such variables, as new variables are added periodically. For an up-to-date list of the available variables see vcf.cfg(5) in Proxedo Network Security Suite 2 Reference Guide, or issue the man CF.cfg command from a shell on the CF host.

      • PNS_protocol: It is the protocol used to transfer the object.

      • file_name: It is the file name or URL of the object.

      • content_type: It is the MIME type of the object as specified by the peer.

      • PNS_server_address.ip: It is the IP address of the server.

      Tip

      The Variable combobox used to create new conditions lists all available variables.
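
      The first-match evaluation of routers within a rule group can be modelled with the short Python sketch below. It is only a conceptual illustration using made-up variable values and fnmatch-style '*' and '?' wildcards; the actual routers are configured in MC as described above.

        from fnmatch import fnmatch

        # A router is a list of (variable, pattern) conditions plus a target action
        # or scanpath. All conditions of a router must match; the first matching
        # router wins, otherwise the rule group's default is used.
        rule_group = {
            'default': 'REJECT',
            'routers': [
                ([('PNS_protocol', 'HTTP'), ('content_type', 'text/html*')], 'html_scanpath'),
                ([('file_name', '*.iso')], 'ACCEPT'),
            ],
        }

        def route(metadata):
            for conditions, target in rule_group['routers']:
                if all(fnmatch(str(metadata.get(var, '')), pattern)
                       for var, pattern in conditions):
                    return target
            return rule_group['default']

        print(route({'PNS_protocol': 'HTTP', 'content_type': 'text/html; charset=utf-8'}))
        # prints: html_scanpath
        print(route({'PNS_protocol': 'FTP', 'file_name': 'debian.iso'}))
        # prints: ACCEPT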

      14.2.4. Configuring PNS proxies to use CF

      CF can only inspect files or streams it receives from PNS proxies. PNS proxies send data to CF the same way as they would stack another proxy.

      The following procedure describes how to configure the communication between PNS proxies and CF.

      14.2.4.1. Procedure – Configuring communication between PNS proxies and CF

      1. First, the connection settings of CF have to be configured in the Bind section on the Global tab of the Content Filtering MC component. Specify either the IP address/port pair on which CF should accept connections, or the Local radiobutton if CF will communicate with PNS through UNIX domain sockets.

        Note

        The same bind settings will have to be used when the Stacking provider is configured in the Policies tab of Application-level Gateway MC component (see Section 6.7.12, Stacking providers for details). These settings are required because PNS and CF do not necessarily run on the same hosts.

        The connection settings of CF

        Figure 14.23. The connection settings of CF


      2. Navigate to the Policies tab of the Application-level Gateway MC component and create a new Stacking Provider. Specify the same connection settings to this stacking provider as set to CF in the previous step.

        Note

        A Stacking provider can contain the connection parameters (that is, IP/port pair) of multiple CF hosts. If more than one host is specified, PNS automatically balances the load sent to these hosts using the round-robin algorithm.

        The connection settings of PNS and CF 1/2

        Figure 14.24. The connection settings of PNS and CF 1/2


        The connection settings of PNS and CF 2/2

        Figure 14.25. The connection settings of PNS and CF 2/2


      3. Navigate to the Proxies tab of the Application-level Gateway MC component, and select the proxy class that will send the data to CF for inspection. This can be an existing or a newly derived proxy class (for example, MyFtpProxy).

        Using the Stacking provider in a proxy

        Figure 14.26. Using the Stacking provider in a proxy


      4. Add the desired stack attribute of the proxy to the Changed config attributes (for example, self.request_stack). For details on the stack attributes of the different proxy classes see the description of the proxy class in Chapter 4, Proxies in Proxedo Network Security Suite 2 Reference Guide.

      5. Select the stack attribute and click on Edit. Click on New, and add a key identifying the element of the particular protocol that should be sent over to CF for inspection (for example, the * parameter). For details, see the description of the proxy class in Chapter 4, Proxies in Proxedo Network Security Suite 2 Reference Guide.

        Adding a key, identifying an element of a protocol

        Figure 14.27. Adding a key, identifying an element of a protocol


      6. Enable stacking by setting the Type attribute of the key to type_ftp_stk_data using the combobox of the Type column, then click Edit.

      7. Click on Edit, select the PNS_stack attribute in the appearing window, and click again on Edit.

        Stacking a provider

        Figure 14.28. Stacking a provider


      8. Set Stacking type to Stacking provider. Select the stacking provider configured in Step 2 from the Provider combobox, and the rule group to be used from the Stacking information combobox.

        Selecting the stacking provider and the rulegroup

        Figure 14.29. Selecting the stacking provider and the rulegroup


      14.2.5. Managing CF performance and resource use

      A number of global settings affecting the performance and resource use of CF can be configured on the Global tab of the Content Filtering MC component. These are discussed in the following sections.

      14.2.5.1. Logging in CF

      The parameters related to logging in CF are the following:

      • Log level: It is the verbosity level of the logs. Level 3 is the default value, and is usually sufficient. Level 0 produces no log messages, while level 10 logs every small event and should only be used for debugging purposes.

      • Use message tags: Enable the logging of message tags.

      Configuring the logging of CF

      Figure 14.30. Configuring the logging of CF


      14.2.5.2. Memory and disk usage of CF

      CF has a number of options governing the memory and hard disk usage behavior of CF. These resources are mainly used to temporarily store the objects while being inspected, decompress archived files, and so on.

      Configuring the memory and disk usage in CF

      Figure 14.31. Configuring the memory and disk usage in CF


      The following memory usage settings are available:

      • Max. disk usage: It defines the maximum amount of hard disk space that CF is allowed to use.

      • Max. memory usage: It sets the maximum amount of memory that CF is allowed to use.

      • Low and high water mark: CF tries to store everything in the memory if possible. If the memory usage of CF reaches high water mark, it starts to swap the data onto the hard disk, until the memory usage decreases to low water mark.

      • Max. non-swapped object size: Objects smaller than this value are never swapped to hard disk.

      • Content-type preview: This parameter determines the amount of data (in bytes) read from MIME objects to detect their MIME-type. Higher value increases the precision of MIME-type detection. Trying to detect the MIME-type of objects is required because there is no guarantee that a MIME object is indeed what it claims to be.

      • Thread limit: This value defines the number of threads CF can start. The graphical user interface sets the CF default Thread limit to 100, but a different value can also be set. Set this value according to the anticipated number of stacked connections. If the Thread limit is too low, the PNS proxies stacking CF will experience delays and refused connection attempts. A suggested method to calculate this number is to monitor the log for the "Too many running threads, waiting for one to become free" message and to increase the PNS/AS/CF Thread limit parameter accordingly.

        Note that if the Thread limit is already set with the CF "--threads=" option in the init script, that option takes precedence. However, the value defined in the init script is cleared with each upgrade, therefore it can easily be updated at each upgrade to the preferred value defined in the GUI.

      14.3. Quarantine management in MC

      All CF modules use a common quarantine on each host. The contents of the quarantine on a particular host can be accessed through the Quarantine icon that is available on the Host, Application-level Gateway, and CF components. The main part of this window shows a list of the quarantined files, including columns of meta-information like their date, size, why they were quarantined, and so on. For a detailed list of the possible meta-information see Section 14.3.1, Information stored about quarantined objects. The objects in the quarantine can be sorted by clicking on any of these columns. The order of the columns can be simply modified by dragging the column header to its desired place.

      The quarantine viewer

      Figure 14.32. The quarantine viewer


      The Quarantine contents window is a Filter window, thus various simple and advanced filtering expressions can be used to display only the required information. For details on the use and capabilities of Filter windows, see Section 3.3.10, Filtering list entries.

      The lower section of the Quarantine contents window contains a command bar to manipulate the selected objects, and a preview box displaying the first 4 Kb of the file. The following options are available from the command bar:

      • View: Display the entire file in a new window.

      • Open with: Open the object with the specified application. The application will be started on the local machine running MC.

      • Save as: Save the object to the local machine running MC.

      • Send as e-mail: Send the selected object(s) as e-mail attachment to the destination address specified in the appearing dialog window.

      • Forward as e-mail: Forward the selected e-mail to the destination address specified in the appearing dialog window. This option is available only to quarantined e-mails.

      • Delete: Delete the selected object(s).

      Tip

      Delete and Send as e-mail can be used at once on multiple selected objects.

      In case of clusters, the command bar includes a node-selection combobox that allows displaying the contents of either all nodes or only a selected one.

      14.3.1. Information stored about quarantined objects

      The following meta-information is stored about the objects in the quarantine:

      • Client address: It is the IP address and the port of the client receiving the quarantined object.

      • Client zone: It is the zone that the client belongs to.

      • Date: It is the date when the object was quarantined.

      • Description: It provides a detailed description of the verdict.

      • Direction: It is the direction the quarantined object was transferred to (that is, upload or download).

      • Detected type: It is the MIME-type of the quarantined object as detected by CF.

      • File: It is the file name or URL of the quarantined object.

      • File ID: It defines a unique identifier of the file in the quarantine.

      • From: It sets the sender address (in case of e-mails).

      • Group: It lists the usergroups that the user who tried to access the object belongs to.

      • Kind: It identifies the kind of the quarantined content: file, e-mail, or newsnet post.

      • Method: It is the HTTP method (for example, GET, POST) in which the quarantined object was detected.

      • Program: It defines the program that quarantined the object (usually CF or PNS).

      • Protocol: It sets the protocol in which the quarantined object was found.

      • Proxy: It is the name of the proxy class that requested Content Filtering on the quarantined object.

      • Recipient: These are the envelope recipient addresses of the object (only in SMTP).

      • Reason: It describes the reason why the object was quarantined (for example, detected as virus, spam, and so on).

      • Rule group: It is the CF rule group that was stacked by the proxy.

      • Scanpath: It sets the scanpath that quarantined the object.

      • Sender: It is the envelope sender address of the object (only in SMTP).

      • Server address: It identifies the IP address and the port of the server sending the quarantined object.

      • Server zone: It sets the zone that the server belongs to.

      • Session ID: It is the ID of the session which requested Content Filtering on the quarantined object.

      • Size: It defines the size of the object in bytes.

      • Spam status: It indicates if the e-mail is detected as spam.

      • Subject: It describes the subject of the e-mail.

      • To: It is the recipient address (in case of e-mails).

      • Type: It defines the MIME-type of the quarantined object according to its MIME header.

      • User: It identifies the name of the user who tried to access (for example, download) the object.

      • Verdict: It is the decision that caused the object to be quarantined (for example, REJECT, ACCEPT_QUARANTINE, and so on)

      • Viruses: It describes the virus(es) detected in the object.

      Naturally, only the information relevant to the specific object is available, for example, an infected file downloaded through HTTP does not have subject, and so on.

      14.3.2. Configuring quarantine cleanup

      Quarantine cleanup on a host can be configured from the Quarantine tab of the given Host component in MC. This interface can be used to create rules that determine when objects are deleted from the quarantine.

      Configuring quarantine cleanup rules

      Figure 14.33. Configuring quarantine cleanup rules


      The main section of the tab displays the currently effective rules (including disabled ones), and the control buttons for managing (creating, deleting, and editing) them. A rule consists of a filter that determines the scope (the affected objects) of the rule, and limitations on the storage of such objects. The available filtering expressions are the same as those used in the Advanced filter options of the Quarantine contents panel (see Section 14.3, Quarantine management in MC). The following options can be used to set limitations on the stored objects:

      • Size: It defines the maximum hard disk space used to store the objects. Objects exceeding this limit are deleted (starting with the oldest object).

      • Number of objects: It sets the maximum number of stored objects. Objects exceeding this limit are deleted (starting with the oldest object).

      • Maximum object age: The objects older than the specified value are deleted (starting with the oldest object).

      Creating new cleanup rules

      Figure 14.34. Creating new cleanup rules


      The limitations only apply to the objects matching the set filter expression. For example, setting the filter expression to Result-Virus matches X and the Size limit to 256 MBytes means that at most 256 MBytes of objects infected with the X virus will be stored in the quarantine.

      Tip

      The limitations are global if no filter is specified.

      Rules are evaluated and executed sequentially; if contradicting rules are found, the strictest one will be effective.
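
      The effect of a size-based cleanup rule can be pictured with the following Python sketch, which removes the oldest matching objects until the quarantine is below the configured limit. It is a conceptual model only, with made-up field names; the actual cleanup rules are configured on the Quarantine tab as described above.

        # Conceptual model of a size-based cleanup rule (oldest objects are deleted first).
        def apply_size_limit(objects, matches_filter, max_bytes):
            in_scope = sorted((o for o in objects if matches_filter(o)),
                              key=lambda o: o['date'])
            total = sum(o['size'] for o in in_scope)
            deleted = []
            while total > max_bytes and in_scope:
                victim = in_scope.pop(0)          # the oldest matching object
                total -= victim['size']
                deleted.append(victim)
            return deleted

        quarantine = [
            {'date': 1, 'size': 150_000_000, 'virus': 'X'},
            {'date': 2, 'size': 120_000_000, 'virus': 'X'},
            {'date': 3, 'size':  90_000_000, 'virus': 'Y'},
        ]
        # keep at most 256 MBytes of objects infected with the X virus
        doomed = apply_size_limit(quarantine, lambda o: o['virus'] == 'X', 256 * 1024 * 1024)
        print([o['date'] for o in doomed])
        # prints: [1]  (only the oldest X-infected object is deleted)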

      Chapter 15. Connection authentication and authorization

      User authentication verifies the identity of the user trying to access a particular network service. When performed on the connection level, it enables the full auditing of the network traffic. Authentication is often used in conjunction with authorization — allowing access to a service only to clients who have the right to do so.

      15.1. Authentication and authorization basics

      Authentication is a method to ensure that certain services (access to a server, and so on) can be used only by the clients allowed to access the service. The process generally called authentication actually consists of three distinct steps:

      • Identification: Determining the client's identity (for example, requesting a username).

      • Authentication: Verifying the client's identity (for example, requesting a password that only the real client knows).

      • Authorization: Granting access to the service (for example, verifying that the authenticated client is allowed to access the service).

        Note

        It is important to note that although authentication and authorization are usually used together, they can also be used independently. Authentication verifies the identity of the client. There are situations where authentication is sufficient, because all users are allowed to access the services; only the event and the user's identity have to be logged. On the other hand, authorization is also possible without authentication, for example if access to a service is time-limited (for example, it can only be accessed outside the normal working hours, and so on). In such situations authentication is not needed.

      Verifying the client's identity requires an authentication method based on something the client knows (for example, a password, the response to a challenge, and so on), or something the client has (for example, a token, a certificate, and so on). Traditionally, firewalls authenticate the incoming connections based on the source IP of the connection: if a user has access (can log in) to that computer, he has the right to use the services. However, there are several problems with this approach. IP addresses can be easily forged (especially on the local network), and are not necessarily static (for example, when Dynamic Host Configuration Protocol (DHCP) is used). Furthermore, this method cannot distinguish the different users who are using a single computer (for example, in a terminal server or hot-desking environment). For these reasons, authentication is most commonly left to the server application providing the particular service. However, PNS is capable of overcoming these problems in a simple, user-friendly way.

      15.1.1. Inband authentication

      Most protocols (for example, HTTP, FTP) capable of authentication offer only inband authentication, meaning that the client must authenticate himself on the server. The advantage of inband authentication is that it is an internal part of the protocol, and most client applications support it. The disadvantage is that many protocols do not support any form of authentication, and those that do, support only a few authentication methods. Usually in an organization it is desirable to use only a single (strong) authentication method, however, not all protocols are suitable for all methods.

      Note

      A few protocols support authentication on the firewall as well; in this case the client actually has to authenticate himself twice: once on the firewall, and once on the server.

      15.1.2. Outband authentication

      Outband authentication is performed independently of the service and the protocol in a separate communication channel. Consequently, any protocol can be authenticated, and the authentication method does not depend on the protocol. That way every protocol can be authenticated with a single authentication method. The only disadvantage of outband authentication is that a special client application (for example, the Authentication Agent) has to be installed and configured on all client machines.

      The process of outband authentication using the Authentication Agent is illustrated on the figure below.

      15.1.2.1. Procedure – Outband authentication using the Authentication Agent

      Outband authentication in PNS

      Figure 15.1. Outband authentication in PNS


      1. The client tries to connect to the server.

      2. PNS connects to the Authentication Agent of the client.

      3. The actual authentication is performed:

        1. username

        2. authentication method selection

        3. authentication according to the selected method

      4. If the authentication is successful, the connection to the server is established.

      15.2. The concept of AS

      AS is a tool that allows PNS services to be authenticated against existing user databases (backends). AS does not itself provide authentication services; it only mediates between PNS and the backends. The traditional access control model in PNS is based on verifying that a connection requesting a service from a source zone is allowed to access the service, and that the service is allowed to enter the destination zone. That is:

      • The client must belong to a zone where the particular service can be initiated (outbound service).

      • The service must be allowed to enter (inbound service) the destination zone.

      Using AS to authenticate the connections adds further requirements to this model: the client must successfully authenticate himself, and (optionally) must be allowed to access the service (that is, authorization can be also required). The actual procedure is as follows:

      When the client initiates a connection, it actually tries to use a PNS service. PNS checks if an authentication policy is associated with the service. If an authentication policy is present, PNS contacts an AS server (the authentication provider specified in the authentication policy). The AS server can connect to a user database (a backend) storing user information (passwords, certificates, and so on) required for a particular authentication method. (The type of the database determines which authentication methods can be used.) Each instance of an AS server can connect to a single backend, but multiple instances can be run from AS. When an authentication request arrives from PNS, AS evaluates a number of configured routers: rules that determine which instance should be used to authenticate the connection. This decision is based on meta-information of the connection received from PNS (for example, Client-IP, Username, and so on). The selected instance connects to the client and the authentication is performed. The authentication can take place within the protocol (inband), or using a dedicated Authentication Agent (outband).

      The operation of AS

      Figure 15.2. The operation of AS


      If the authentication is successful, PNS verifies that the client is allowed to access the service (by evaluating the authorization policy). If the client is authorized to access the service, the server-side connection is built. The client is automatically authorized if no authorization policy is assigned to the service.

      Authorization in PNS

      Figure 15.3. Authorization in PNS


      15.2.1. Supported backends and authentication methods

      AS currently supports the following database backends:

      • AS_db: AS_db authenticates users against an LDAP database, supporting the following authentication methods: username/password, S/Key, CryptoCard RB1, LDAP binding, GSSAPI/Kerberos5, and X.509.

      • file: Authentication through the traditional htpasswd file. It supports the username/password authentication method.

      • PAM: It means an authentication using Pluggable Authentication Modules. Any authentication method supported in PAM can be used.

      • RADIUS: It is an authentication using a RADIUS server. It supports the username/password and challenge/response authentication methods.

      15.3. Authenticating connections with AS

      This section describes how to initialize and configure AS, and how to create the required PNS policies.

      15.3.1. Configuring AS

      The various parameters of AS can be configured on the Authentication Server MC component. Before starting to configure AS, add this component to the host (see Procedure 3.2.1.3.1, Adding new configuration components to host for details).

      Adding the Authentication server component to the host

      Figure 15.4. Adding the Authentication server component to the host


      15.3.1.1. Configuring backends

      AS instances using specific database backends can be configured in the Instances section of the Authentication Server MC component. The existing instances and the type of database they use are displayed in a list; instances can be created, deleted, and modified using the control buttons below the list.

      Note

      Only unused instances can be deleted; if an instance is used in a router, the router has to be modified or deleted first.

      To create a new instance, complete the following procedure:

      15.3.1.1.1. Procedure – Creating a new instance

      1. Navigate to the Authentication Server MC component, and click on New in the Instances section.

        Creating a new instance

        Figure 15.5. Creating a new instance


      2. Enter a name for the instance and select the type of the database this instance will connect to from the Authentication backend combobox. Options specific to the selected backend type will be displayed.

        Selecting backend type

        Figure 15.6. Selecting backend type


      3. Configure the options of the backend. The available backends and their options are described in the following sections. The permitted authentication methods can also be selected here.

      The AS_db backend

      The AS_db backend authenticates users against an LDAP database using the Microsoft Active Directory, the POSIX, or the Novell eDirectory/NDS scheme.

      The AS_db backend

      Figure 15.7. The AS_db backend


      The backend has the following settings:

      • Fake user: Enable authentication faking. This requires a valid user account in the LDAP database that is exclusively used for this purpose. The user name of this account has to be set in the corresponding textbox.

        Note

        All backends are capable of authentication faking. This is a method to hide the valid usernames, so that they cannot be guessed (for example using brute-force methods). If somebody tries to authenticate with a non-existing username, the attempt is not immediately rejected: the full authentication process is simulated (for example, password is requested, and so on), and rejected only at the end of the process. That way it is not possible to determine if the username itself was valid or not. It is highly recommended to enable this option.

      • LDAP connection settings

        • Host: It is the IP address of the LDAP server.

        • Port: It is the port number of the LDAP server.

        • Use SSL: Enable SSL encryption to secure the communication between AS and the backend.

        • Bind DN: Bind to this DN before accessing the database.

        • Set Bind password: The password to use when binding to the LDAP server.

      • LDAP search settings:

        • Base DN: Perform queries using this DN as base.

        • Filter: Search for accounts using this filter expression (see the example after this list).

        • Scope: It specifies the scope of the search. base, sub, and one are acceptable values, specifying LDAP_SCOPE_BASE, LDAP_SCOPE_SUB, and LDAP_SCOPE_ONE, respectively.

        • Username is a DN: Indicates that the incoming username is a fully qualified DN.

        • Follow referrals: If this option is set, AS will respect the referral response from the LDAP server when looking up a user.

        • Scheme: Specify LDAP scheme to use: Active Directory, POSIX, or NDS style directory layout.

          Note

          Make sure to set Scheme to Active Directory when using a Microsoft Active Directory server as a database backend.

      • Authentication methods: Select and configure the allowed authentication methods.

        • Password: It implements password authentication. Allow password authentication only if the connection between PNS and AS is secured (see Section 15.3.2, Authentication of PNS services with AS for details).

        • S/Key: S/Key-based authentication.

        • CryptoCard RB1: CryptoCard RB1 hardware-token-based authentication.

        • LDAP Bind: It is authentication against the target LDAP server. Only password authentication is supported by this method, therefore it is only available if the connection between AS and PNS is secured with SSL.

        • GSSAPI/Kerberos5: It defines GSSAPI-based authentication. The Principal name representing this authentication service also has to be set.

        • x.509: It is authentication based on x.509 certificates. To use this method, a number of further options have to be specified:

          • The CA issuing the client certificates: this can be an internal CA group (managed by the PNS PKI, see Chapter 11, Key and certificate management in PNS for details), or an external one. In the latter case, the locations of the trusted CA certificates and the corresponding CRLs have to be set as space-separated lists of file:// or ldap:// URLs.

          • Compare to stored certificate: Compare the stored certificate bit-by-bit to the certificate supplied by the client. The authentication will fail when the certificates do not match, even if the new certificate is trusted by the CA.

          • Verify trust: Verify the validity of the certificate (that is, the certificate has to be issued by one of the trusted CAs and must not be revoked). This verification is independent from the Compare to stored certificate, so if both parameters are set, both conditions must be fulfilled to accept the certificate.

          • Verify depth: The maximum length of the verification chain.

          • Offer trusted CA list: Send a list of trusted certificates to the client to choose from to narrow the list of available certificates.

          • Accept only AA connections: By default, AS accepts connections only from Authentication Agents (AA). Disable this option if you are using a different client to authenticate on AS, for example, if a web-browser authenticates using a client-side certificate.

            Disabling this option works only with proxies that support inband authentication, for example, HTTP.
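
      As an illustration of the LDAP search settings above, a POSIX-style directory could be searched with values similar to the following. All values below (DNs, filter, scope) are assumptions that depend entirely on the local directory layout; they are not defaults:

      • Base DN: ou=people,dc=example,dc=com

      • Filter: (objectClass=posixAccount)

      • Scope: sub

      • Scheme: POSIX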

      The htpass backend

      The htpass backend authenticates users against an Apache htpasswd style password file. The name (including the path) of the file to be used has to be specified in the Filename textbox. Authentication faking can be enabled by selecting the Fake user checkbox.
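
      For reference, such a password file can be created and maintained on the AS host with the standard Apache htpasswd utility. A minimal example follows; the file path and user names are assumptions, use the file name configured in the Filename textbox:

        # create the file and add the first user (prompts for the password)
        htpasswd -c /etc/as/users.htpasswd alice

        # add a further user to the existing file (no -c, which would overwrite it)
        htpasswd /etc/as/users.htpasswd bob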

      The htpass backend

      Figure 15.8. The htpass backend


      The Pluggable authentication module (PAM) backend

      The PAM backend implements authentication based on the local authentication settings of the host running AS. It basically authenticates the users against the local PAM installation and/or using GSSAPI/Kerberos5.

      The PAM backend

      Figure 15.9. The PAM backend


      The PAM backend has the following parameters:

      • Enable PAM authentication: Enable authentication using PAM. The PAM service to be used for the authentication also has to be specified.

      • GSSAPI/Kerberos5: Enable GSSAPI based authentication. The Principal name representing this authentication service also has to be set.

      • Use local accounts: Use the local passwd/group database to query group membership of a given account.

      Authentication faking can be enabled by selecting the Fake user checkbox.
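
      For illustration, assuming the PAM service configured above is named as (the service name is an assumption, not a default), a minimal /etc/pam.d/as file that authenticates against the local Unix user database could look like the following:

        # /etc/pam.d/as -- minimal example, adjust to the local PAM policy
        auth     required   pam_unix.so
        account  required   pam_unix.so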

      The RADIUS backend

      The RADIUS backend has the following parameters:

      • Host: It is the hostname of the RADIUS server.

      • Port: It is the port of the RADIUS server.

      • Secret: It is the shared secret between the RADIUS server and AS.

      Authentication faking can be enabled by selecting the Fake user checkbox.

      The RADIUS backend

      Figure 15.10. The RADIUS backend


      15.3.1.2. Configuring routers

      Routers are simple conditional rules (that is, if-then expressions) that determine which instance has to be used to authenticate a particular connection. They consist of a condition and a corresponding instance: if the parameters of the connection match the condition, the authentication is performed with the specified instance. The condition consists of a variable and a pattern: the condition is true if the variable of the connection is equal to the specified pattern. Routers can be configured in the Routers section of the Authentication Server MC component. They are evaluated sequentially: if the incoming connection matches a router, authentication is performed with the instance specified in that router, otherwise the next router is evaluated. To configure a new router, only the condition has to be specified and the backend instance selected. A minimal sketch of this evaluation logic is shown after the procedure. The exact procedure is as follows:

      1. Navigate to the Authentication Server MC component, and click New in the Routers section of the tab.

        Defining new routers

        Figure 15.11. Defining new routers


      2. Select the instance that will authenticate the connections matching this router from the Target instance combobox.

        Configuring a new router

        Figure 15.12. Configuring a new router


      3. Click on New, and define a condition for the router. Select the variable to be used from the Variable combobox, and enter the search term to the Value field. If the Variable of the inspected connection matches Value, the instance specified in Target instance will authenticate the connection.

        Currently the following variables can be used to create conditions: Client IP, Client zone, Service, and User.

        Defining conditions

        Figure 15.13. Defining conditions


        Note

        A router can contain multiple conditions. In this case, all specified conditions must be true to select the target instance (that is, the conditions are connected with a logical AND operation).

        Using multiple conditions in a router

        Figure 15.14. Using multiple conditions in a router
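
      The following minimal sketch summarizes the router evaluation logic described above. It is conceptual only, not the actual AS implementation; the variable values and instance names are made up for the example:

        # Conceptual sketch of router evaluation -- not the actual AS code.
        # Each router is a set of conditions (variable/value pairs) and a target instance.
        routers = [
            ({"Client zone": "intranet", "Service": "intra_http"}, "ldap_instance"),
            ({"Client zone": "dmz"}, "radius_instance"),
        ]

        def select_instance(connection):
            """Return the instance of the first router whose conditions all match."""
            for conditions, instance in routers:
                # all conditions of a router are connected with a logical AND
                if all(connection.get(var) == value for var, value in conditions.items()):
                    return instance
            return None  # no router matched

        # A connection from the intranet zone using the intra_http service
        # selects ldap_instance:
        select_instance({"Client zone": "intranet", "Service": "intra_http", "User": "alice"})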


      15.3.2. Authentication of PNS services with AS

      AS can only authenticate connections for which PNS services explicitly request authentication. To allow this, the connection between PNS and AS has to be set up. This requires configuring some connection parameters both in PNS and in AS. The procedure below describes how to configure these parameters.

      15.3.2.1. Procedure – Configuring communication between PNS and AS

      1. First, the connection settings of AS have to be configured in the Bind section on the Authentication server MC component. Specify the IP address/port pair on which AS should accept connections.

        Configuring the bind parameters of AS

        Figure 15.15. Configuring the bind parameters of AS


        Tip

        If AS and PNS are running on the same machine, use the local loopback interface (IP:127.0.0.1).

        Note

        The same bind settings will have to be used when the Authentication provider is configured on the Policies tab of the Application-level Gateway MC component.

      2. If PNS and AS are running on separate machines, enable and configure SSL encryption. Check the Require SSL for incoming connections checkbox, click on ... next to the Certificate textbox, and select a certificate. This certificate has to be available on the AS host and will be presented to PNS to verify the identity of the AS server. For details about creating certificates, see Procedure 11.3.8.2, Creating certificates.

        Configuring the SSL for AS

        Figure 15.16. Configuring the SSL for AS


        To enable mutual authentication (that is, to verify the certificate of PNS), check the Verify peer certificate checkbox and select the CA group containing the trusted certificates. Also make sure to set the Verify depth high enough so that the root CA certificate in the CA chain can be verified. The default value (3) should be appropriate for internal CAs.

      3. The connection also has to be set up from the PNS side. This can be accomplished by creating an Authentication provider on the Policies tab of the Application-level Gateway MC component. Click on New, select Authentication provider from the Policy type combobox, and enter a name for the provider into the Policy textbox.

        Creating an Authentication provider

        Figure 15.17. Creating an Authentication provider


      4. Enter the IP address of the AS server into the Address field. This must be the same address as specified as Bind address for AS in Step 1.

        Configuring an Authentication provider

        Figure 15.18. Configuring an Authentication provider


      5. If SSL encryption was enabled in Step 2, select the Certificate PNS will show to AS. PNS can also verify the certificate shown by AS using the CAs specified in CA group.

        Configuring SSL for an Authentication provider

        Figure 15.19. Configuring SSL for an Authentication provider


        Note

        Obviously, the CAs issuing the certificates of PNS and AS must be members of the CA groups set to be used to perform the verification of the certificates, otherwise the verification will fail.

      Now an Authentication policy has to be set up. Authentication policies are used by PNS services and specify which authentication provider is used by the service, the type of authentication (inband, outband), and caching parameters. An authentication policy can be used by multiple services.

      15.3.2.2. Procedure – Configuring PNS Authentication policies

      1. Create an Authentication policy on the Policies tab of the Application-level Gateway MC component. Click on New, select Authentication policy from the Policy type combobox, and enter a name for the policy into the Policy textbox.

        Creating Authentication policies

        Figure 15.20. Creating Authentication policies


      2. Select the Authentication provider to be used by clicking ... next to the Authentication provider combobox and selecting a provider.

        Selecting the Authentication provider

        Figure 15.21. Selecting the Authentication provider


      3. Select the type of authentication to be used from the Class combobox. The following authentication types are available:

        Selecting the type of the authentication

        Figure 15.22. Selecting the type of the authentication


        • Inband authentication: Use the built-in authentication of the protocol to authenticate the client on PNS.

        • Authentication Agent: Outband authentication using the Authentication Agent. This method can authenticate any protocol. For agent authentication the following additional parameters have to be set:

          • Certificate: Select the certificate that PNS will show to the Authentication Agent running on the client. The certificate is required because the communication between the Authentication Agent and PNS is SSL-encrypted. The certificate has to be issued by a CA trusted by the Authentication Agent. The process of installing CA certificates for the Authentication Agent is described in Chapter 6, Installing the Authentication Agent (AA) in Proxedo Network Security Suite 2 Installation Guide.

          • Port: The port where PNS accepts connections from the Authentication Agents running on the clients.

          • Timeout: The period of time the client has to complete the authentication after an authentication request is sent by PNS.

        • Server authentication: Enable the client to connect to the target server, and extract its authentication information from the protocol.

      4. Configure the authentication cache using the Class combobox of the Authentication cache section. The following options are available:

        Configuring the authentication cache

        Figure 15.23. Configuring the authentication cache


        • None: Disable authentication caching. The client has to reauthenticate each time a new service is started.

        • AuthCache: Store the result of the authentication for the period specified in the Timeout field, that is, after a successful authentication the client can use the service (and start new ones of the same type) for that period. For example, once authenticated for an HTTP service, the client can browse the web for the Timeout period, but has to authenticate again to use FTP.

          If the Update timeout for each session checkbox is selected, the timeout measurement is restarted each time the client starts a service. Selecting the Consider all services equivalent checkbox means that PNS does not differentiate between the different services (protocols) used by the client: after a successful authentication the client can use all available services without having to reauthenticate. For instance, if this option is enabled in the example above, the client does not have to reauthenticate to start an FTP connection.

      To actually use the authentication policy configured above, the PNS services have to reference the policy.

      Using authentication in PNS services

      Figure 15.24. Using authentication in PNS services


      The authentication policy to be used by the service can be selected from the Authentication policy combobox on the Instances tab of the Application-level Gateway MC component. The combobox displays all the available authentication policies.

      15.3.3. Authorization of PNS services

      Each PNS service can use an Authorization policy to determine whether a client is allowed to access the service. If the authorization is based on the identity of the client, it takes place only after a successful authentication, since identity-based authorization can be performed only if the client's identity is known and has been verified. The actual authorization is performed by PNS, based on the authentication information received from AS or extracted from the protocol. PNS offers various authorization models, ranging from simple (PermitUser) to advanced (NEyesAuthorization). Both identity-based and identity-independent authorization models are available. The configuration of authorization policies is described in the procedure below.

      15.3.3.1. Procedure – Configuring authorization policies

      1. Create an Authorization policy on the Policies tab of the Application-level Gateway MC component. Click on New, select Authorization policy from the Policy type combobox, and enter a name for the policy into the Policy textbox.

        Creating authorization policies

        Figure 15.25. Creating authorization policies


      2. Select the authorization model to use in the policy from the Class combobox. The following models are available:

        Selecting an authorization model

        Figure 15.26. Selecting an authorization model


        • BasicAccessList: Authorize only users meeting a set of authorization conditions, for example, certain users, users belonging to specified groups, or any combination of conditions using the other authorization models.

        • NEyesAuthorization: The client trying to access the service has to be authorized by one (or more) already authorized clients. This model can be used to implement 4-eyes authorization solutions.

        • PairAuthorization: Authorize only pairs of users: single users cannot access the service, only two different users (with different usernames) accessing it together can.

          Tip

          NEyesAuthorization and PairAuthorization are useful when controlled access to sensitive (for example, financial) data has to be ensured and audited.

        • PermitGroup: Authorize only the members of the listed usergroups. This is a simplified version of the BasicAccessList model.

        • PermitUser: Authorize only the listed users. This is a simplified version of the BasicAccessList model.

        • PermitTime: Authorize any user but only in the set time interval. This authorization model does not require authentication.

          Tip

          Use the BasicAccessList authorization model to combine user-based and time-based authorization. For example, create a policy consisting of two Required policies: PermitTime and PermitUser.

      3. Configure the parameters of the selected authorization class. See Section 15.3.3.2, Authorization models of PNS for the detailed description of the classes.

        Configuring authorization policies

        Figure 15.27. Configuring authorization policies


      4. Navigate to the Instances tab of the Application-level Gateway MC component, and select the service that will use the authorization policy.

        Using authorization policies in PNS services

        Figure 15.28. Using authorization policies in PNS services


      5. In the Service parameters section, select the Authorization policy to use from the combobox.

      15.3.3.2. Authorization models of PNS

      The configuration parameters of the authorization models available in PNS are described in this section.

      BasicAccessList

      BasicAccessList can be used to create complex authorization scenarios by combining other authorization types into a set of Required or Sufficient conditions. Each condition refers to an authorization type (for example, PermitUser, PairAuthorization, and so on) and Authentication task (Sufficient or Required). The conditions are evaluated sequentially. A connection is authorized if a Sufficient condition matches the connection, or all Required conditions are fulfilled. If a Required condition is not met, the connection is refused.

      Using BasicAccessList

      Figure 15.29. Using BasicAccessList


      Note

      Due to the sequential evaluation of the conditions, Sufficient conditions should be placed at the top of the list.

      Creating BasicAccessList conditions

      Figure 15.30. Creating BasicAccessList conditions


      To create a new condition click New, and select the type of the condition from the Authentication task and Condition comboboxes. Then click on New, and enter the value for the condition (for example, the username or the name of the group).

      Example 15.1. BasicAccessList

      The following condition list allows the admin user and the users who are members of both group_a and group_b to access the service:

      • Authentication task: Sufficient, Condition: User, Value: admin

      • Authentication task: Required, Condition: Group, Value: group_a

      • Authentication task: Required, Condition: Group, Value: group_b

      NEyes authorization

      When NEyesAuthorization is used, the client trying to access the service has to be authorized by another (already authorized) client (this authorization chain can be expanded to multiple levels). NEyesAuthorization can only be used in conjunction with another NEyesAuthorization policy: one of them is the authorizer, which is set to authorize the other (the authorized) policy.

      In a simple 4-eyes scenario the authorizer policy points to the authorized policy in its Authorization policy parameter, and has its Wait for other authorization policies to finish parameter disabled. The authorized policy has an empty Authorization policy parameter (meaning that it is at the lower end of an N-eyes chain), and has its Wait for other authorization policies to finish parameter enabled, meaning that it has to be authorized by another policy.

      NEyesAuthorization has the following parameters:

      • authorize_policy: The authorization policy authorized by the current NEyesAuthorization policy.

      • Wait for other authorization policies to finish: If this parameter is set, the client has to be authorized by another client. If set to FALSE, the current client is at the top of an authorizing chain.

      • Max time for the authorization to arrive: The time (in milliseconds) PNS will wait for the authorizing user to authorize the one accessing the service.

        Note

        When setting this parameter, consider the timeout value set in the client application used to access the server. There is no use in specifying a longer time here, as the clients will time out anyway.

      Pair authorization

      When this authorization model is used, only two users simultaneously accessing the service are authorized; single users are not permitted to access the service. Set the time (in milliseconds) PNS will wait for the second user to access the service using the Max time for the pair to arrive spinbutton.

      When setting this parameter, consider the timeout value set in the client application used to access the server. There is no use in specifying a longer time here, as the clients will time out anyway.

      PermitGroup

      This model allows the members of the specified groups to access the service. Select the grouplist parameter and click Edit. In the appearing list editor window click New, then enter the name of an authorized usergroup. Additional groups can be added by clicking New again.

      Using PermitGroup

      Figure 15.31. Using PermitGroup


      Note

      The elements of the list can be disabled through the local menu.

      PermitUser

      This model allows the listed users to access the service. Select the userlist parameter and click Edit. In the appearing list editor window click New, then enter the name of an authorized user. Additional users can be added by clicking New again.

      Using PermitUser

      Figure 15.32. Using PermitUser


      Note

      The elements of the list can be disabled through the local menu.

      PermitTime

      The PermitTime policy stores a set of intervals specified by their starting and ending time — access to a service using such a policy is permitted only within this interval.

      To configure an interval, click on Edit, then click New. Select the first qstring value and click on Edit. Enter the starting time of the interval (for example, 8:30) and click Ok. The ending time of the interval can be set similarly through the second qstring value.

      Note

      A single policy can contain multiple intervals.

      15.3.4. Configuring the Authentication Agent

      The Authentication Agent has to be installed on the client machines when outband authentication is used on the network. For detailed instructions about how to install the Authentication Agent, see Chapter 6, Installing the Authentication Agent (AA) in Proxedo Network Security Suite 2 Installation Guide and Authentication Agent Manual.

      15.4. Logging in AS

      Logging in AS can be configured in the Logging section of the Authentication server MC component. The parameters related to logging in AS are the following:

      Configuring logging in AS

      Figure 15.33. Configuring logging in AS


      • Log level: It is the verbosity level of the logs. Level 3 is the default value, and is usually sufficient. Log level 0 does not produce log messages, while log level 10 logs every small event, and shall only be used for debugging purposes.

      • Trust connection: This parameter permits password-based authentication methods even for unencrypted connections. The default value is 0 (false).

        If this parameter is enabled, passwords are accepted even if the connection between PNS and AS is not protected by Transport Layer Security (TLS); otherwise they are not.

      • Thread limit: This value defines the number of threads AS can start. The graphical user interface sets the AS default Thread limit to 100, but a different value can also be set. Set this value according to the anticipated number of stacked connections. If the Thread limit is too low, the PNS proxies stacking AS will experience delays and refused connection attempts. A suggested method to calibrate this value is to monitor the log for the "Too many running threads, waiting for one to become free" message and to increase the PNS/AS/CF Thread limit parameter accordingly.

        Note that if the Thread limit is already set with the AS "--threads=" option in the init script, that option takes precedence. However, the value defined in the init script is cleared with each upgrade, so the preferred value defined in the GUI can easily be applied again after each upgrade.
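
        For example, the log message quoted above can be located with a command similar to the following (the log file path is an assumption and depends on the local syslog configuration):

          grep "Too many running threads" /var/log/syslog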

      Chapter 16. Virtual Private Networks

      This chapter explains how to build encrypted connections between remote networks and hosts using Virtual Private Networks (VPNs).

      16.1. Virtual Private Networking basics

      Computers and even complete networks often have to be connected across the Internet, like in the case of organizations having multiple offices or employees doing remote work. In such situations it is essential to encrypt the communication to prevent anyone unauthorized from obtaining sensitive data. Virtual Private Networks (VPNs) solve the problem of communicating confidentially over an untrusted, public network.

      VPNs retain the privacy, authenticity, and integrity of the connection, and ensure that the communication is not eavesdropped or modified. VPN traffic is transferred on top of standard protocols over regular networks (for example, the Internet) by encapsulating data and protocol information of the private network within the protocol data of the public network. As a result, nobody can recover the tunneled data by examining the traffic between the two endpoints.

      Virtual Private Networks

      Figure 16.1. Virtual Private Networks


      VPNs are commonly used in the following situations:

      • to connect the internal networks of different offices of an organization

      • to allow remotely-working employees access to the internal network

      • to transfer unencrypted protocols in a secure, encrypted channel — without having to modify the original protocol

      • to secure Wi-Fi networks

      16.1.1. Types of VPN

      Different VPN solutions use different methods to encrypt the communication. The main VPN types are IPSec, SSL/TLS, PPTP, and L2TP, with each type having many different implementations. PNS supports the following VPN solutions:

      • IPSec (strongSwan)

      • SSL (OpenVPN)

      16.1.2. VPN topologies

      The topology of a VPN determines what is connected using the VPN. The basic VPN topologies are the following:

      • Peer-to-Peer: It connects two hosts. (It is also called Point-to-Point VPN.)

      • Peer-to-Network: It connects a single host to a network. This is the most common VPN topology, regularly used to allow remote workers access to the intranet of the organization. (It is also called Point-to-LAN VPN.)

      • Network-to-Network: It completely connects two subnetworks. This solution is commonly used to connect the local networks of an organization having multiple offices. (It is also called LAN-to-LAN VPN.)

      In every case, the VPN tunnel is created between two endpoints: the connecting hosts, or the firewall of the connecting network. The IP addresses of the connected networks or hosts can be fixed (Fix IP connections) or dynamic (so-called Roadwarrior connections). Roadwarrior connections are typically Peer-to-Network connections, where many peers (roadwarrior clients) can access the protected network.

      16.1.3. The IPSec protocol

      IP Security (IPSec) is a group of protocols that authenticate and encrypt every IP packet of a data stream. IPSec operates at the network layer of the OSI model (layer 3), so it can protect both TCP and UDP traffic. IPSec is also part of IPv6.

      IPSec uses the Encapsulating Security Payload (ESP) and Authentication Header (AH) protocols to secure data packets, and the Internet Key Exchange (IKE) protocol to exchange cryptographic keys. IPSec has the following two modes:

      • Transport mode: It is used to create peer-to-peer VPNs. Only the data part of the IP packet is encrypted, the header is not modified.

      • Tunnel mode: It builds a complete IP tunnel to create Network-to-Network VPNs.

      The IPSec implementation used by PNS has two main components. Pluto is a userspace application responsible for the key exchange when building VPN connections. KLIPS is a kernel module that handles the encryption and transmission of the tunneled traffic after the VPN connection has been established.

      16.1.4. The OpenVPN protocol

      OpenVPN creates a VPN between the endpoints using an SSL/TLS channel. OpenVPN operates at the transport layer of the OSI model (layer 4). The SSL channel is usually created using UDP packets, though it is also possible to use TCP. Using SSL enables the endpoints to authenticate each other using certificates.

      The OpenVPN server can 'push' certain parameters to the clients, for example, IP addresses, routing commands, and other connection parameters. OpenVPN transfers all communication using a single IP port.

      The connecting clients receive an internal IP address, similarly to DHCP. This IP address is valid only within the VPN tunnel, and usually belongs to a virtual subnet.

      OpenVPN creates VPN tunnels between virtual interfaces. These interfaces have internal IP addresses that are independent from the IP addresses of the physical interfaces, and are visible only from the VPN tunnels.

      OpenVPN runs completely in userspace; the user does not need special privileges to use it. The kernel running on the host must support the virtual interfaces used to create the VPN tunnels.

      The operation of OpenVPN

      Figure 16.2. The operation of OpenVPN


      16.2. Using VPN connections

      VPN connections can be configured using the VPN MC component. Before starting to configure VPN connections, add this component to the host (see Procedure 3.2.1.3.1, Adding new configuration components to host for details).

      Using VPN connections

      Figure 16.3. Using VPN connections


      Use the New, Delete, and Edit buttons to create, remove, or rename VPN connections. Clicking on Control displays a drop-down menu to start, stop, or restart the selected connections.

      The VPN MC component automatically creates the required ipsec and tun interfaces for the configured VPN tunnels. Use these interfaces to define PNS services that can be accessed through the VPN tunnel. Firewall rules can use these interfaces like a regular, physical network interface. The general procedure of using VPNs is as follows:

      16.2.1. Procedure – Using VPN connections

      1. Create the certificates required for authentication using the PNS PKI. See Procedure 11.3.8.2, Creating certificates for details.

      2. Configure the VPN tunnel using the VPN MC component. See Section 16.3, Configuring IPSec connections and Section 16.4, Configuring SSL (OpenVPN) connections for details.

        Tip

        To create Peer to Peer or Network to Network connections, use IPSec.

        To create Roadwarrior servers, use SSL.

      3. Create services that can be accessed from the VPN tunnel using the PNS MC component.

      4. Configure the remote endpoints (for example, roadwarrior clients) that will access the VPN tunnel. This process may involve installing VPN client software and certificates, and so on.

      16.3. Configuring IPSec connections

      This section explains how to configure IPSec VPN connections.

      16.3.1. Procedure – Configuring IPSec connections

      1. Navigate to the VPN component of the PNS host that will be the endpoint of the VPN connection. Select the Connections tab.

        Configuring IPSec connections

        Figure 16.4. Configuring IPSec connections


      2. Click New and enter a name for the connection.

      3. Select the IPSec protocol option.

      4. Set the VPN topology and the transport mode in the Scenario section on the General tab.

        • To create a Peer-to-Peer connection, select the Peer to Peer and the Transport options.

        • To create a Peer-to-Network connection, select the Peer to Peer and the Tunnel options.

        • To create a Roadwarrior server, select the Roadwarrior server and the Transport options.

        • To create a Network-to-Network connection, select the Peer to Peer and the Tunnel options.

        Note

        When creating a Network-to-Network connection, the two endpoints of the VPN tunnel do NOT use the VPN to communicate with each other. To encrypt the communication of the endpoints, create a separate Peer-to-Peer connection.

        Selecting the IPSec scenario

        Figure 16.5. Selecting the IPSec scenario


      5. Configure the local networking parameters.

        These parameters affect the PNS endpoint of the VPN connection. Set the following parameters:

        • Local address: Select the IP address that PNS will use for the VPN connection.

        • Local ID: It is the ID of the PNS endpoint in the VPN connection. Leave this field blank unless you experience difficulties in establishing the connection with the remote VPN application. If you set the Local ID, you might also want to set the Use ID in ipsec.secrets option.

        • Local subnet: It is the subnet behind PNS that will be accessible using the VPN tunnel. This option is available only for Peer-to-Network and Network-to-Network connections.

        Configuring local networking parameters

        Figure 16.6. Configuring local networking parameters


      6. Configure the networking parameters of the remote endpoint. Set the following parameters:

        • Remote address: It is the IP address of the remote endpoint. It does not apply for roadwarrior VPNs.

        • Remote ID: It is the ID of the remote endpoint in the VPN connection. Leave this field blank unless you experience difficulties in establishing the connection with the remote VPN application. If you set the Remote ID, you might also want to set the Use ID in ipsec.secrets option.

        • Remote subnet: It is the subnet behind the remote endpoint that will be accessible using the VPN tunnel. This option is available only for Peer-to-Network and Network-to-Network connections.

          Note

          Network-to-Network connections connect the subnets specified in the Local subnet and Remote subnet parameters.

          Do not specify the subnet parameter for the peer side of Peer-to-Network connections: leave either the Local subnet or the Remote subnet parameter empty.

        Configuring remote networking parameters

        Figure 16.7. Configuring remote networking parameters


      7. When configuring Peer-to-Peer or Network-to-Network connections, it is crucial that the endpoint operators cooperate. If the Active side option is selected, PNS opens the VPN connection to the remote endpoint. It is possible to enable the Active side option on both sides, but if the tunnel is unstable, it is recommended to enable it only on one side.

      8. Click on the Authentication tab and configure authentication.

        Configuring authentication

        Figure 16.8. Configuring authentication


        To use password-based authentication, select the Shared secret option and enter the password in the Secret field.

        Note

        Authentication using a shared secret is not a secure authentication method. Use it only if the remote endpoint does not support certificate-based authentication. Always use long and complicated shared secrets: at least twelve characters containing a mix of alphanumerical and special characters. Remember to change the shared secret regularly.

        To use certificate-based authentication, select the X.509 option and set the following parameters:

        • Local certificate: Select a certificate available on the PNS host. PNS will show this certificate to the remote endpoint.

        • If the remote endpoint has a specific certificate, select the Verify certificate option and select the certificate from the Remote certificate field. PNS will use this certificate to verify the certificate of the remote endpoint.

        • If there are several remote endpoints that can connect to the VPN tunnel, select the Verify trust option and, in the CA group field, select the trusted Certificate Authority (CA) group containing the certificate of the CA that issued the certificates of the remote endpoints. PNS will use this trusted CA group to verify the certificates of the remote endpoints. (See Section 11.3.7, Trusted CAs for details.)

          PNS sends the common name of the accepted CAs to the remote endpoint, so the client knows what kind of certificate is required for the authentication. Select a specific CA certificate using the CA hint option if you want to accept only certificates signed by the selected CA.

        Note

        See Chapter 11, Key and certificate management in PNS for details on creating and importing certificates, CAs, and trusted CA groups required for certificate-based authentication.

      9. Before setting the action of the Dead Peer Detection option, it is necessary that the two endpoint operators agree on the preferred settings. If the Active side option was selected earlier for PNS, it is recommended to select the restart option of the Action parameter. This way PNS attempts to restart the VPN connection if the remote endpoint becomes unavailable.

        If PNS is on the passive side, that is, the Active side option was not enabled earlier, it is recommended to set the Action parameter of the Dead Peer Detection to hold for PNS and to restart on the remote endpoint.

        Note

        Dead Peer Detection is effective only if it is enabled on both endpoints of the VPN connection. If Dead Peer Detection is enabled on one side only and disabled on the other, it may lead to an unreliable VPN connection. If Dead Peer Detection is not required, it must be disabled at both endpoints.

        Configuring IPSec options

        Figure 16.9. Configuring IPSec options


        The following additional parameters can be configured for Dead Peer Detection:

        • Delay

          This parameter defines the time interval at which informational messages are sent to the peer.

        • Timeout

          This parameter defines the timeout interval after which all connections to a peer are deleted in case of inactivity.

        • Action

          This parameter controls the usage of the Dead Peer Detection protocol, where informational messages are periodically sent to check whether the connection toward the IPSec peer is still alive.

          The available values are: none, clear, hold and restart.

          The values clear, hold and restart activate Dead Peer Detection and specify the action to be taken in case of a timeout.

          If the parameter is set to clear, the connection is closed without any further action.

          If the parameter is set to hold, matching traffic is searched for and renegotiation of the connection is attempted.

          If the parameter is set to restart, renegotiation of the connection is attempted immediately.

          If the parameter is set to none, no Dead Peer Detection messages are sent to the peer.
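
          For reference, these settings correspond to the strongSwan dpddelay, dpdtimeout and dpdaction parameters. A connection configured to restart on a dead peer could contain a fragment similar to the following in its generated configuration (the connection name and values are illustrative only):

            conn example-tunnel
                    dpddelay=30s
                    dpdtimeout=150s
                    dpdaction=restart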

      10. Set other options if needed. See Section 16.3.2, IPSec options for details.

      11. Configure the parameters of the Keying tab, if necessary.

        Keying tab parameters

        Figure 16.10. Keying tab parameters


        • Encapsulating Security Payload (ESP)

          This list presents the Encapsulating Security Payload (ESP) encryption and authentication algorithms that shall be used for the actual connection.

          If the DH group is also specified, it defines that a Diffie-Hellman (DH) exchange shall be included in re-keying or in the initial negotiation.

          The ESN parameter defines whether Extended Sequence Number (ESN) support with the peer is enabled or not. The default value is 'no'.

        • Internet Key Exchange (IKE)

          This list presents the Internet Key Exchange (IKE) encryption and authentication algorithms that shall be used for the actual connection.

          If the DH group is also specified, it defines that a Diffie-Hellman exchange shall be included in re-keying or in the initial negotiation.

          If no Pseudo Random Function (PRF) algorithm is configured, the algorithms defined for integrity are proposed as PRF.

      16.3.2. IPSec options

      Global parameters that apply to every IPSec VPN connection of the PNS host can be set on the Global options tab.

      Set special options of a particular IPSec VPN connection on the Connections tab and the Options and Keying submenu tabs.

      Besides the Dead Peer Detection parameters introduced in Section 16.3, Configuring IPSec connections, there are additional parameters that can be configured under the Options tab.

      Configuring 'Common options' parameters at IPSec Options tab

      Figure 16.11. Configuring 'Common options' parameters at IPSec Options tab


      Common options

      • Use IPComp compression

        If this parameter is enabled, that is, the checkbox is checked, the daemon accepts both compressed and uncompressed data. If the parameter is not enabled, the daemon accepts only uncompressed data.

      • Exchange method

        The available values are: ikev1, ikev2.

        The key exchange method used for initializing the connection can be selected here.

      • Close action

        The available values are: none, clear, hold and restart.

        This parameter defines the action to take if the remote peer unexpectedly closes. This parameter is not supported for ikev1 connections.

      • Fragmentation

        The available values are: yes, accept, force and no.

        This parameter enables Internet Key Exchange (IKE) fragmentation. Note that fragmented messages arriving from a peer are always processed, regardless of this parameter option.

        If this parameter is set to yes, that is, checked, and the peer supports it, any oversized IKE message will be fragmented.

        If the parameter is set to accept, fragmented messages arriving from the peer are supported, but the daemon does not send fragmented messages itself.

        If the parameter is set to force, the initial IKE message will also be fragmented.

      • Additional options

        This parameter enables the user to manually provide any additional strongSwan parameter that is not available in the GUI. Also, if a parameter is available in the GUI but the required setting is not, it can be defined here manually. Parameter settings already defined in the GUI can also be overridden at Additional options, as the configuration uses the latest definition of each parameter. The parameters have to be provided in the format described on the strongSwan documentation site, available at:

        https://wiki.strongswan.org/projects/strongswan/wiki/ConnSection.
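
        For example, the following lines could be entered at Additional options to disable MOBIKE and IKEv2 reauthentication for a connection; mobike and reauth are standard strongSwan conn-section parameters, and whether they are needed depends entirely on the remote endpoint:

          mobike=no
          reauth=no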

      The Keying parameters section of the Options tab specifies key-handling and key-exchange parameters. Modify these parameters only if it is necessary for compatibility with the remote endpoint.

      Configuring keying parameters at IPSec Options tab

      Figure 16.12. Configuring keying parameters at IPSec Options tab


      Note

      Do not modify these options unless it is required and it is perfectly clear how these parameter settings affect the configuration.

      The options of the Keying tab specify the encryption used in the connection.

      Keying parameters

      • Key life

        This parameter defines how long the keyed connection lasts, from successful negotiation until expiry.

      • Key tries

        This parameter defines the number of attempts for negotiating or renegotiating the connection.

      • IKE lifetime

        This parameter defines the length of the keying channel connection before it is renegotiated.

      • Rekey

        Enabling this parameter requires the connection to be renegotiated when it is about to expire.

        Disabling this parameter will result in the daemon not requesting renegotiation; it does not, however, prevent the daemon from responding to a renegotiation requested from the other end.

      16.3.3. Global IPSec options

      The following options apply to every IPSec VPN tunnel. These settings are available on the Global options tab.

      • Verbose IKE: Include log messages of the Internet Key Exchange (IKE) protocol in the logs.

      • Cache CRLs: This parameter can be set to ON, that is cachecrls=yes, or to OFF, that is cachecrls=no. If Certificate Revocation List (CRL) caching is enabled, local caching of CRLs is activated and no new CRL is picked up until the locally cached CRL has expired. The cached CRL is stored in /etc/ipsec.d/crls under a unique filename. As soon as it has expired, it is replaced with an updated CRL.

      • Strict CRL policy: The CRL handling policy is quite tolerant by default, that is, strictcrlpolicy is set to no. Consequently, if a CRL has expired, only a warning is issued and the peer certificate is still accepted. If a stricter CRL policy is required, this parameter has to be enabled here, and strictcrlpolicy will be set to yes. If strictcrlpolicy is enabled, no certificate is accepted from a peer until a corresponding up-to-date CRL is available. If this parameter is enabled, it is therefore crucial to make sure that the CRLs are updated in time.

      For details on the other options, see the strongSwan documentation available at http://wiki.strongswan.org/.

      16.4. Configuring SSL (OpenVPN) connections

      This section explains how to configure SSL VPN connections.

      16.4.1. Procedure – Configuring SSL connections

      1. Navigate to the VPN component of the PNS host that will be the endpoint of the VPN connection. Select the Connections tab.

        Configuring SSL (OpenVPN) connections

        Figure 16.13. Configuring SSL (OpenVPN) connections


      2. Click New and enter a name for the connection.

      3. Select the SSL protocol option.

      4. Set the VPN topology in the Scenario section.

        Selecting the SSL (OpenVPN) scenario

        Figure 16.14. Selecting the SSL (OpenVPN) scenario


        To create a Roadwarrior server, select the Roadwarrior server option.

        Select the Peer to Peer option for other topologies.

        Note

        When creating a Network-to-Network connection, the two endpoints of the VPN tunnel do not use the VPN to communicate with each other. To encrypt the communication of the endpoints, create a separate Peer-to-Peer connection.

      5. Configure the local networking parameters. These parameters affect the PNS endpoint of the VPN connection.

        Configuring local networking parameters

        Figure 16.15. Configuring local networking parameters


        Set the following parameters in the Listen options section:

        • Local address: Select the IP address that PNS will use for the VPN connection. If PNS should accept incoming VPN connections on every interface, enter the 0.0.0.0 IP address.

        • Port: The port PNS uses to listen for incoming VPN connections. Use the default port (1194) unless there is a reason not to.

          Note

          These parameters have no effect if PNS is the client-side of a VPN tunnel and does not accept incoming VPN connections.

        Set the following parameters in the Tunnel settings section:

        • Interface: The name of the virtual interface used for the VPN connection. MS automatically assigns the next available interface.

        • Local: The IP address of PNS as seen from the VPN tunnel. The tun interface will bind to this address, so PNS rules can use this address.

        • Remote: The IP address of the remote endpoint as seen from the VPN tunnel.

        • By default, the VPN connections use the UDP protocol to communicate with the peers. To use the TCP protocol instead, select Protocol > TCP.

        The Local and Remote addresses must be non-routable virtual IP addresses (for example, from the 192.168.0.0 range). These IP addresses are visible only on the tun interface, and are needed for building the VPN tunnel.

        Warning

        The Local and Remote addresses must be specified even for roadwarrior scenarios. Use the first two addresses of the dynamic IP range used for the remote clients.

      6. Configure the networking parameters of the remote endpoint.

        Configuring remote networking parameters

        Figure 16.16. Configuring remote networking parameters


        For Peer-to-Peer scenarios, set the following parameters:

        • Remote address: The IP address of the remote endpoint.

        • Remote port: The port PNS connects to on the remote VPN server. Use the default port (1194) unless there is a reason not to.

        • Pull configuration: Download the configuration from the remote endpoint. (Works only if the remote endpoint has its push options specified.)

        • No local bind: Select this option if the PNS host that you are configuring should run in client-mode only, without accepting incoming VPN connections.

        When PNS acts as a roadwarrior server, set the IP address range using the Dynamic address from and Dynamic address to fields. Clients connecting to PNS will receive their IP addresses from this range.

        Note

        The configured address range cannot contain more than 65535 IP addresses.

        Every Windows client needs a /30 netmask (4 IP addresses). Make sure to increase the available address range when you have many Windows clients.

      7. When configuring Peer-to-Peer or Network-to-Network connections, select the Active side option so that PNS initiates the VPN connection to the remote endpoint. If possible, enable this option on the remote endpoint as well.

      8. Click on the Authentication tab and configure authentication.

        Configuring authentication

        Figure 16.17. Configuring authentication


        Set the following parameters:

        • Certificate: Select a certificate available on the PNS host. PNS will show this certificate to the remote endpoint.

        • CA: Select the trusted Certificate Authority (CA) group that includes the certificate of the root CA that issued the certificate of the remote endpoint. PNS will use this CA group to verify the certificate of the remote endpoint.

        Warning

        If several remote endpoints use the same certificate to authenticate, only one of them can be connected to PNS at the same time.

        Note

        See Chapter 11, Key and certificate management in PNS for details on creating and importing certificates, CAs, and trusted CA groups required for certificate-based authentication.

      9. Configure routing for the VPN tunnel. Click on the Routing tab, and add a routing entry for every network that is on the remote end of the VPN tunnel (or located behind the remote endpoint). PNS sends every packet that targets these networks through the VPN tunnel. To add a new network, click New, and enter the IP address and the netmask of the network.

        Configuring tunnel routing

        Figure 16.18. Configuring tunnel routing


      10. Configure push options on the Push options tab.

        Configuring push options

        Figure 16.19. Configuring push options


        Tip

        Push options are most often used to set the configuration of roadwarrior clients. For example, they can be used to assign a fixed IP address to a specific client.

        Configuring client routing

        Figure 16.20. Configuring client routing


        Click Route to add routing entries for the remote endpoint. These routing entries determine which networks protected by PNS are accessible from the remote endpoint.

        See Section 16.4.2.2, Push options for details.
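
        For reference, the entries configured here are turned into standard OpenVPN push directives in the generated configuration, for example (illustrative addresses only):

          # make a network protected by PNS reachable for the connecting client
          push "route 10.10.0.0 255.255.255.0"
          # hand out a DNS server address to the client
          push "dhcp-option DNS 10.10.0.1"

        A fixed VPN address can be assigned to an individual roadwarrior client with the OpenVPN ifconfig-push directive placed in that client's client-config-dir file.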

      11. Set other options as needed. See Section 16.4.2, SSL options for details.

      16.4.2. SSL options

      Special options of a particular SSL VPN connection can be set on the Options and the Keying tabs.

      Note

      Do not modify these options unless it is required and you have the necessary expertise.

      Configuring OpenVPN options

      Figure 16.21. Configuring OpenVPN options


      The following options can be set on the Options tab:

      • Keep-alive timeout: PNS pings the remote endpoint periodically. This parameter specifies the time between two ping messages in seconds.

      • Keep-alive delay: The amount of time in seconds PNS waits for a response to the ping messages. If no response is received within this period, PNS restarts the VPN connection.

      • Verbose: The verbosity level of the VPN tunnel.

      • Compression: Compress the data transferred in the VPN tunnel.

      • Propagate ToS: If enabled and the Type of Service (ToS) parameter of the packet transferred using the VPN is set, PNS sets the ToS parameter of the encrypted packet to the same value.

      • Persistent IP address: Preserve the initially resolved local IP address and the port number across SIGUSR1 or --ping-restart restarts.

      • Persistent TUN Interface: Create a persistent tunnel. Normally TUN/TAP tunnels exist only for the period of time that an application has them open. Enabling this option builds persistent tunnels that live through multiple instantiations of OpenVPN and die only when they are deleted or the machine is rebooted.

      • Duplicate CN: If enabled, multiple clients with the same common name can connect at the same time. If this option is disabled, PNS will disconnect new clients if a client having the same common name is already connected.

      • CCD Exclusive: If enabled, the connecting clients must have a --client-config-dir file configured, otherwise the authentication of the client will fail. This file is generated automatically if the Roadwarrior Server option is enabled on the General tab.

      • Additional options: Enter any additional options you need to set here. Options entered here are automatically appended to the end of the configuration file of the VPN tunnel.

      • SSL engine: Use the specified SSL-accelerator engine.

      • Enable management daemon: Enable a TCP server on an IP port to handle daemon management functions. The password provided is used by the TCP clients to access management functions.

        While the management port is designed for the programmatic control of OpenVPN by other applications, it is possible to telnet to the port using a telnet client in raw mode. Once connected, type help for a list of commands.

      • Handle service manually: Do not start this VPN at boot (omit it from the /etc/default/openvpn file). This VPN will be managed by other processes, for example, by keepalived or by a monitoring system, so you will not start or stop this tunnel accidentally with the global control button.
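
      These settings are reflected in the OpenVPN configuration file of the VPN tunnel. The following fragment is only an illustrative sketch of the OpenVPN directives that typically correspond to some of the options above; the values and the file path are examples, not defaults:

        # Illustrative /etc/openvpn/<tunnel>.conf fragment (example values)
        keepalive 10 60           # Keep-alive timeout and Keep-alive delay, in seconds
        verb 3                    # Verbose
        comp-lzo                  # Compression
        passtos                   # Propagate ToS
        persist-local-ip          # Persistent IP address
        persist-tun               # Persistent TUN Interface
        duplicate-cn              # Duplicate CN
        ccd-exclusive             # CCD Exclusive
        management 127.0.0.1 7505 /etc/openvpn/mgmt.pw   # Enable management daemon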

      The options of the Keying tab specify the encryption used in the connection. Modify these parameters only if it is necessary for compatibility with the remote endpoint.

      16.4.2.1. Procedure – Configuring the VPN management daemon

      For details on the OpenVPN management interface, see the management-notes.txt file in the management folder of the OpenVPN source distribution.

      1. To enable the management daemon for a particular SSL VPN connection, select the VPN MC component, select the particular SSL connection, and click the Options tab.

      2. Select Enable management daemon.

      3. Enter the IP address where the daemon will accept management connections into the Server address field. It is strongly recommended that IP be set to 127.0.0.1 (localhost) to restrict accessibility of the management server to local clients.

      4. Enter the port number where the daemon will accept management connections into the Server port field. Note that the IP address:port pair must be unique for every management interface.

      5. Set the path to a file that will store the password to the management daemon. Clients connecting to the management interface will be required to enter the password set in the first line of the password file.

      6. Save your changes and repeat the above steps for other VPN connections if needed.
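
      For example, assuming the management daemon was configured to listen on 127.0.0.1, port 7505 (illustrative values only), you can connect to it from the PNS host and list the available commands:

        telnet 127.0.0.1 7505
        # enter the password stored in the password file, then:
        help                      # list the available management commands
        status                    # display the connected clients and the routing table
        exit                      # close the management session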

      16.4.2.2. Push options

      Push options are settings that the remote clients can download from PNS when the VPN tunnel is built.

      Configuring global push options

      Figure 16.22. Configuring global push options


      To set push options that apply for every remote endpoint of the selected VPN connection, double-click the <default> entry.

      Configuring push options

      Figure 16.23. Configuring push options


      The following push options can be set on the Push options tab (an illustrative fragment of the corresponding OpenVPN directives follows the list):

      • Domain: The domain of the network.

      • DNS: Address of the Domain Name Server (DNS).

      • WINS: Address of the Windows Internet Name Service (WINS) Server.

      • NBDD: Address of the NetBIOS Datagram Distribution (NBDD) Server.

      • NBT: Type of the NetBIOS over TCP/IP node. Enter the number corresponding to the selected mode:

        • 1: Send broadcast messages.

        • 2: Send point-to-point name queries to a WINS server.

        • 4: Send broadcast message and then query the nameserver.

        • 8: Query name server and then send broadcast message.

      • Redirect gateway: Sends all network traffic of the remote endpoint through the VPN tunnel. See the section The Redirect gateway option for details.

        Note

        Using the Redirect gateway option means that the remote client will have access only to the services permitted by PNS for the VPN tunnel when the VPN tunnel is active. For example, the client will not be able to surf the Internet using HTTP if PNS allows only POP3 services for the clients connected using the VPN.

      • Explicit exit notify: The remote endpoint sends a message to PNS before closing the VPN tunnel. If this option is disabled, PNS does not immediately notice that an endpoint became unavailable, and error messages might appear in the PNS logs.

      • Additional options: Enter any additional push options you need to set here. Options entered here are automatically appended to the end of the .ccd file of the VPN tunnel. This option can be used for example to set the iroute parameter.

      • Route: Add routing entries for the remote endpoint. These routing entries determine which networks protected by PNS are accessible from the remote endpoint.
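
      In the configuration generated for the tunnel, these settings typically appear as OpenVPN push directives. The fragment below is only an illustrative sketch with example names and addresses:

        # Illustrative push directives (example values)
        push "dhcp-option DOMAIN example.com"       # Domain
        push "dhcp-option DNS 10.0.0.10"            # DNS
        push "dhcp-option WINS 10.0.0.11"           # WINS
        push "dhcp-option NBDD 10.0.0.12"           # NBDD
        push "dhcp-option NBT 2"                    # NBT node type
        push "redirect-gateway def1"                # Redirect gateway
        push "route 192.168.10.0 255.255.255.0"     # Route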

      To set push options for a specific remote endpoint, click New and select the certificate of the remote endpoint.

      Note

      Alternatively, you can enter the Unique Name of the endpoint certificate into the Cert field. That way, certificates not available in the PNS PKI system can be used as well.

      Configuring client-specific push options

      Figure 16.24. Configuring client-specific push options


      In this case, the IP addresses visible in the tunnel can also be set, so you can assign a fixed IP address to the client using the Local parameter. Note that the Local and Remote directions are from the client's perspective: Local is the remote client's IP address in the VPN tunnel, while Remote is the IP address of PNS in the VPN tunnel.

      When assigning fixed IP addresses to Windows clients, remember that every Windows client needs a /30 netmask (4 IP addresses). For every client, use an IP pair of the following list as the last octet of the Local and Remote IP addresses (see the example entry after the list):

      [  1,  2] [  5,  6] [  9, 10] [ 13, 14] [ 17, 18]
      [ 21, 22] [ 25, 26] [ 29, 30] [ 33, 34] [ 37, 38]
      [ 41, 42] [ 45, 46] [ 49, 50] [ 53, 54] [ 57, 58]
      [ 61, 62] [ 65, 66] [ 69, 70] [ 73, 74] [ 77, 78]
      [ 81, 82] [ 85, 86] [ 89, 90] [ 93, 94] [ 97, 98]
      [101,102] [105,106] [109,110] [113,114] [117,118]
      [121,122] [125,126] [129,130] [133,134] [137,138]
      [141,142] [145,146] [149,150] [153,154] [157,158]
      [161,162] [165,166] [169,170] [173,174] [177,178]
      [181,182] [185,186] [189,190] [193,194] [197,198]
      [201,202] [205,206] [209,210] [213,214] [217,218]
      [221,222] [225,226] [229,230] [233,234] [237,238]
      [241,242] [245,246] [249,250] [253,254]
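
      Client-specific settings like these are typically written into the corresponding client-config-dir (.ccd) file (see the Additional options entry above). The following hypothetical entry assigns the first address pair of the list above and makes a network behind the client reachable; every address is an example only:

        # Illustrative .ccd entry (example addresses)
        ifconfig-push 10.8.0.2 10.8.0.1           # Local (client) and Remote (PNS) tunnel addresses
        push "route 192.168.20.0 255.255.255.0"   # Route: network behind PNS, pushed to this client
        iroute 192.168.50.0 255.255.255.0         # network behind the client, set via Additional options
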
      The Redirect gateway option

      Enabling the Redirect gateway push option overrides the default gateway settings of the remote endpoint and sends all network traffic of the remote endpoint through the VPN tunnel. The remote endpoint can access the Internet only through the VPN tunnel. That way PNS can control what kind of communication (protocols, and so on) the remote client can use while connected to the internal network using the VPN tunnel.

      Normal routing

      Figure 16.25. Normal routing


      Using the Redirect gateway option

      Figure 16.26. Using the Redirect gateway option


      The following flags can be set for the Redirect gateway option, with Def1 set by default (see the example after this list):

      • Local: Select this option if the end-points of the VPN tunnel are directly connected through a common subnet, such as wireless. Note that in this case PNS does not create a static route for the remote address of the tunnel.

      • Bypass DHCP: Select this option to add a direct route to the DHCP server (if it is non-local) which bypasses the VPN tunnel.

      • Def1: Select this option to override the default gateway by using 0.0.0.0/1 and 128.0.0.0/1 instead of 0.0.0.0/0. That way the original default gateway is overridden but not deleted.

      • Bypass DNS: Select this option to add a direct route to the DNS server(s) (if it is non-local) which bypasses the VPN tunnel.
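
      In OpenVPN terms, the selected flags are appended to the pushed redirect-gateway directive. For example, enabling the Def1 and Bypass DHCP flags corresponds to a line like the following in the generated configuration:

        push "redirect-gateway def1 bypass-dhcp"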

      Chapter 17. Integrating PNS to external monitoring systems

      Proxedo Network Security Suite 2 allows you to monitor the resources of your PNS hosts using external monitoring tools. PNS can be integrated into the following monitoring environments:

      17.1. Procedure – Monitoring PNS with Munin

      Purpose: 

      To monitor your Proxedo Network Security Suite hosts with Munin, complete the following steps. Using Munin, you can monitor the memory usage of your hosts, and the number of running processes and threads for your PNS instances.

      Steps: 

      1. Log in to the host locally, or remotely using SSH. For details on enabling SSH access, see Section 9.4, Local services on PNS.

      2. Install the required packages using the sudo apt install <package-name> command. The package you have to install depends on the role of the host.

        • For PNS hosts, install the PNS-pro-munin-plugins package.

        • For MS hosts, install the MS-munin-plugins package.

        • For CF hosts, install the CF-munin-plugins package.

        • If a host has multiple roles, install every applicable package. For example, if MS and CF are running on the same host, install the MS-munin-plugins and CF-munin-plugins packages.

      3. Install the munin-node package and configure it as needed for your environment. For details, see the Munin documentation; a minimal configuration example follows these steps.

      4. Log in to your MS host using MC, and enable access to the TCP/4949 port. For details, see Section 9.4, Local services on PNS.

      5. Repeat this procedure for every Proxedo Network Security Suite host that you want to integrate into your monitoring system.
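
      For example, to allow your Munin server to query the node, the /etc/munin/munin-node.conf file needs an allow line that matches the address of the server. The excerpt below is only a sketch; 10.0.0.5 is a placeholder for the IP address of your Munin server:

        # Illustrative /etc/munin/munin-node.conf excerpt
        port 4949
        allow ^127\.0\.0\.1$
        allow ^10\.0\.0\.5$      # IP address of the Munin server (placeholder)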

      17.2. Procedure – Installing a Munin server on a MS host

      Purpose: 

      If you do not have a separate Munin server, or you do not want to integrate your PNS hosts into your existing Munin environment, you can install the Munin server on a standalone MS host (installing the Munin server on a PNS or CF host is possible, but strongly discouraged). To achieve this, complete the following steps.

      Steps: 

      1. Log in to the host locally, or remotely using SSH. For details on enabling SSH access, see Section 9.4, Local services on PNS.

      2. Install the Munin server and the lighttpd webserver packages. The webserver is required to display the resource graphs collected using Munin. Issue the following command: sudo apt install munin lighttpd

      3. Configure the Munin server and the webserver as needed for your environment. For details, see the documentation of Munin and lighttpd.

        Warning
        • By default, access to the Munin graphs does not require authentication.

        • Configure Munin and lighttpd to use SSL encryption, and disable unencrypted HTTP access on port 80. Use port 443 or a non-standard port instead (see the sketch after these steps).

      4. Log in to your MS host using MC, and enable access on your MS host to the TCP port you configured in the previous step. For details, see Section 9.4, Local services on PNS.
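
      The following lighttpd excerpt is only a minimal sketch of the SSL-related settings mentioned in the warning above; the certificate path is a placeholder, and the exact directives may differ between lighttpd versions:

        # Illustrative /etc/lighttpd/lighttpd.conf excerpt
        server.port = 443                                # serve HTTPS only, no port 80
        ssl.engine  = "enable"
        ssl.pemfile = "/etc/lighttpd/munin-server.pem"   # placeholder certificate path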

      17.3. Procedure – Monitoring PNS with Nagios

      Purpose: 

      To monitor your Proxedo Network Security Suite hosts with Nagios, complete the following steps. Using Nagios, you can monitor the memory usage of your hosts, the number of running processes and threads of your PNS instances, and the expiry of the product licenses and certificates (for details on certificate and license monitoring, see Procedure 11.3.8.9, Monitoring licenses and certificates).

      Prerequisites: 

      • To monitor your Proxedo Network Security Suite hosts with Nagios, you must already have a central Nagios server installed. It is not possible to install the Nagios server on Proxedo Network Security Suite hosts.

      • Experience in administering Nagios is required.

      Steps: 

      1. Log in to the host locally, or remotely using SSH. For details on enabling SSH access, see Section 9.4, Local services on PNS.

      2. Issue the following command to install the required packages: sudo apt install PNS-pro-nagios-plugins nagios-nrpe-server. The PNS-pro-nagios-plugins package installs three scripts; these are automatically configured to run as root, and are listed in the /etc/nagios/nrpe.d/PNS.cfg file.

      3. Log in to your MS host using MC, and enable access to the TCP/5666 port. For details, see Section 9.4, Local services on PNS.

      4. Repeat this procedure for every Proxedo Network Security Suite host that you want to integrate into your monitoring system.

      5. Add the Proxedo Network Security Suite hosts to your central Nagios server, and create services for the hosts (see the sketch after these steps). For details, see the documentation of Nagios.

        Note

        Adjust the alerting limits set in the scripts as needed for your environment.
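
      On the central Nagios server, the checks are typically added as services that call the check_nrpe plugin. The following definition is only an illustrative sketch: pns-fw1 and the command name are placeholders that you have to replace with your host name and with a command defined in the /etc/nagios/nrpe.d/PNS.cfg file of the monitored host:

        # Illustrative Nagios service definition (placeholder names)
        define service {
            use                   generic-service
            host_name             pns-fw1
            service_description   PNS process check
            check_command         check_nrpe!<command-from-PNS.cfg>
        }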

      Appendix A. Keyboard shortcuts in Management Console

      A.1. Function keys

      Ctrl-F1

      Display the tooltip of the selected item.

      Shift-F1

      Display the help of the selected item.

      F5

      Refresh the screen and the status of the displayed objects.

      F8

      Activate the splitter bar. Use the cursor keys to resize the selected panel.

      F9

      Hide or display the configuration tree.

      F10

      Activate the main menu.

      Shift-F10

      Display the local menu of the selected item.

      Tab, Shift-Tab

      Move keyboard focus to the next or the previous item.

      Ctrl-Tab, Ctrl-Shift-Tab

      Move keyboard focus to the next or the previous item if Tab has a function in the window (for example, in the Text Editor component).

      A.2. Shortcuts

      Ctrl-S

      Commit (save) the configuration.

      Ctrl-Shift-E

      View the configuration stored in MS.

      Ctrl-E

      Compare the configuration stored in MS with the one running on the selected host.

      Ctrl-U

      Upload the configuration to MS.

      Ctrl-I

      Display the Control Service dialog of the selected component.

      Ctrl-X

      Move the selected object to the clipboard.

      Ctrl-C

      Copy the selected object to the clipboard.

      Ctrl-V

      Paste the object from the clipboard.

      Ctrl-F

      Find a keyword in the active list or tree.

      Ctrl-A

      Select all objects of the active list or tree.

      Shift-+

      Open all objects of the active list or tree.

      Shift--

      Close all objects of the active list or tree.

      Ctrl-Q

      Close MC.

      Esc

      Close the active dialog.

      A.3. Access keys

      Every button, menu item, checkbox, and so on in MC has an access key: an underlined letter in the name of the object. Pressing Alt-<accesskey> activates the object, for example, selects or unselects the checkbox, or activates the button. For instance, Alt-R renames an interface on the Networking component.

      Appendix B. Further readings

      The following is a list of recommended readings concerning various parts of PNS administration.

      Note

      Note that URLs can change over time. The URLs of the online references were valid at the time of writing.

      B.1. PNS-related material

      B.3. Postfix documentation

      • The Postfix Home Page

      • Blum, Richard. Postfix. SAMS Publishing, 2001. ISBN: 0672321149

      • Dent, Kyle D. Postfix: The Definitive Guide. O'Reilly Associates, 2004. ISBN: 0596002122

      B.4. BIND Documentation

      B.5. NTP references

      B.6. SSH resources

      B.7. TCP/IP Networking

      • Stevens, W., and Wright, Gary. TCP/IP Illustrated: Volumes 1-3. Addison-Wesley, 2001. ISBN: 0201776316

      • Mann, Scott. Linux TCP/IP Network Administration. Prentice Hall, 2002. ISBN: 0130322202

      B.8. Netfilter/nftables

      B.9. General security-related resources

      • Garfinkel, Simson, et al. Practical UNIX and Internet Security, 3/E. O'Reilly Associates, 2003. ISBN: 0596003234

      B.10. syslog-ng references

      B.11. Python references

      B.13. Virtual Private Networks (VPN)

      • Wouters, Paul, and Bantoft, Ken. Openswan: Building and Integrating Virtual Private Networks. Packt Publishing, 2006. ISBN 1904811256.

      • Feilner, Markus. OpenVPN: Building and Integrating Virtual Private Networks, 2006. Packt Publishing. ISBN 190481185X.

      Appendix C. Proxedo Network Security Suite End-User License Agreement

      (c) BalaSys IT Security.

      C.1. 1. SUBJECT OF THE LICENSE CONTRACT

      1.1 This License Contract is entered into by and between BalaSys and Licensee and sets out the terms and conditions under which Licensee and/or Licensee's Authorized Subsidiaries may use the Proxedo Network Security Suite under this License Contract.

      C.2. 2. DEFINITIONS

      In this License Contract, the following words shall have the following meanings:

      2.1 BalaSys

      Company name: BalaSys IT Security.

      Registered office: H-1117 Budapest, Alíz Str. 4.

      Company registration number: 01-09-687127

      Tax number: HU11996468-2-43

      2.2. Words and expressions

      Annexed Software

      Any third party software that is not a BalaSys Product contained in the install media of the BalaSys Product.

      Authorized Subsidiary

      Any subsidiary organization: (i) in which Licensee possesses more than fifty percent (50%) of the voting power and (ii) which is located within the Territory.

      BalaSys Product

      Any software, hardware or service licensed, sold, or provided by BalaSys including any installation, education, support and warranty services, with the exception of the Annexed Software.

      License Contract

      The present Proxedo Network Security Suite License Contract.

      Product Documentation

      Any documentation referring to the Proxedo Network Security Suite or any module thereof, with special regard to the reference guide, the administration guide, the product description, the installation guide, user guides and manuals.

      Protected Hosts

      Host computers located in the zones protected by Proxedo Network Security Suite, that is, any computer connected to the network and capable of establishing IP connections through the firewall.

      Protected Objects

      The entire Proxedo Network Security Suite including all of its modules, all the related Product Documentation; the source code, the structure of the databases, all registered information reflecting the structure of the Proxedo Network Security Suite and all the adaptation and copies of the Protected Objects that presently exist or that are to be developed in the future, or any product falling under the copyright of BalaSys.

      Proxedo Network Security Suite

      The application software BalaSys Product designed for securing computer networks, as defined by the Product Description.

      Warranty Period

      The period of twelve (12) months from the date of delivery of the Proxedo Network Security Suite to Licensee.

      Territory

      The countries or areas specified above in respect of which Licensee shall be entitled to install and/or use Proxedo Network Security Suite.

      Take Over Protocol

      The document signed by the parties which contains

      a) identification data of Licensee;

      b) ordered options of Proxedo Network Security Suite, number of Protected Hosts and designation of licensed modules thereof;

      c) designation of the Territory;

      d) declaration of the parties on accepting the terms and conditions of this License Contract; and

      e) declaration of Licensee that it is in receipt of the install media.

      C.3. 3. LICENSE GRANTS AND RESTRICTIONS

      3.1. For the Proxedo Network Security Suite licensed under this License Contract, BalaSys grants to Licensee a non-exclusive, non-transferable, perpetual license to use such BalaSys Product under the terms and conditions of this License Contract and the applicable Take Over Protocol.

      3.2. Licensee shall use the Proxedo Network Security Suite in the configuration and in the quantities specified in the Take Over Protocol within the Territory.

      3.3. On the install media, all modules of the Proxedo Network Security Suite are present; however, Licensee shall not be entitled to use any module which was not licensed to it. Access rights to modules and IP connections are controlled by an "electronic key" accompanying the Proxedo Network Security Suite.

      3.4. Licensee shall be entitled to make one back-up copy of the install media containing the Proxedo Network Security Suite.

      3.5. Licensee shall make available the Protected Objects at its disposal solely to its own employees and those of the Authorized Subsidiaries.

      3.6. Licensee shall take all reasonable steps to protect BalaSys's rights with respect to the Protected Objects with special regard and care to protecting it from any unauthorized access.

      3.7. Licensee shall, in 5 working days, properly answer the queries of BalaSys referring to the actual usage conditions of the Proxedo Network Security Suite that may differ or allegedly differ from the license conditions.

      3.8. Licensee shall not modify the Proxedo Network Security Suite in any way, with special regard to the functions inspecting the usage of the software. Licensee shall install the code permitting the usage of the Proxedo Network Security Suite according to the provisions defined for it by BalaSys. Licensee may not modify or cancel such codes. Configuration settings of the Proxedo Network Security Suite in accordance with the possibilities offered by the system shall not be construed as modification of the software.

      3.9. Licensee shall only be entitled to analyze the structure of the BalaSys Products (decompilation or reverse engineering) if concurrent operation with a software developed by a third party is necessary, and, upon request to supply the information required for concurrent operation, BalaSys does not provide such information within 60 days from the receipt of such a request. These user actions are limited to parts of the BalaSys Product which are necessary for concurrent operation.

      3.10. Any information obtained as a result of applying the previous Section

      (i) cannot be used for purposes other than concurrent operation with the BalaSys Product;

      (ii) cannot be disclosed to third parties unless it is necessary for concurrent operation with the BalaSys Product;

      (iii) cannot be used for the development, production or distribution of a different software which is similar to the BalaSys Product in its form of expression, or for any other act violating copyright.

      3.11. For any Annexed Software contained by the same install media as the BalaSys Product, the terms and conditions defined by its copyright owner shall be properly applied. BalaSys does not grant any license rights to any Annexed Software.

      3.12. Any usage of the Proxedo Network Security Suite exceeding the limits and restrictions defined in this License Contract shall qualify as material breach of the License Contract.

      3.13. The Number of Protected Hosts shall not exceed the amount defined in the Take Over Protocol.

      3.14. Licensee shall have the right to obtain and use content updates only if Licensee concludes a maintenance contract that includes such content updates, or if Licensee has otherwise separately acquired the right to obtain and use such content updates. This License Contract does not otherwise permit Licensee to obtain and use content updates.

      C.4.  4. SUBSIDIARIES

      4.1 Authorized Subsidiaries may also utilize the services of the Proxedo Network Security Suite under the terms and conditions of this License Contract. Any Authorized Subsidiary utilising any service of the Proxedo Network Security Suite will be deemed to have accepted the terms and conditions of this License Contract.

      C.5.  5. INTELLECTUAL PROPERTY RIGHTS

      5.1. Licensee agrees that BalaSys owns all rights, titles, and interests related to the Proxedo Network Security Suite and all of BalaSys's patents, trademarks, trade names, inventions, copyrights, know-how, and trade secrets relating to the design, manufacture, operation or service of the BalaSys Products.

      5.2. The use by Licensee of any of these intellectual property rights is authorized only for the purposes set forth herein, and upon termination of this License Contract for any reason, such authorization shall cease.

      5.3. The BalaSys Products are licensed only for internal business purposes in every case, under the condition that such license does not convey any license, expressly or by implication, to manufacture, duplicate or otherwise copy or reproduce any of the BalaSys Products.

      No other rights than expressly stated herein are granted to Licensee.

      5.4. Licensee will take appropriate steps with its Authorized Subsidiaries, as BalaSys may request, to inform them of and assure compliance with the restrictions contained in the License Contract.

      C.6.  6. TRADE MARKS

      6.1. BalaSys hereby grants to Licensee the non-exclusive right to use the trade marks of the BalaSys Products in the Territory in accordance with the terms and for the duration of this License Contract.

      6.2. BalaSys makes no representation or warranty as to the validity or enforceability of the trade marks, nor as to whether these infringe any intellectual property rights of third parties in the Territory.

      C.7. 7. NEGLIGENT INFRINGEMENT

      7.1. In case of negligent infringement of BalaSys's rights with respect to the Proxedo Network Security Suite, committed by violating the restrictions and limitations defined by this License Contract, Licensee shall pay liquidated damages to BalaSys. The amount of the liquidated damages shall be twice as much as the price of the BalaSys Product concerned, on BalaSys's current Price List.

      C.8. 8. INTELLECTUAL PROPERTY INDEMNIFICATION

      8.1. BalaSys shall pay all damages, costs and reasonable attorney's fees awarded against Licensee in connection with any claim brought against Licensee to the extent that such claim is based on a claim that Licensee's authorized use of the BalaSys Product infringes a patent, copyright, trademark or trade secret. Licensee shall notify BalaSys in writing of any such claim as soon as Licensee learns of it and shall cooperate fully with BalaSys in connection with the defense of that claim. BalaSys shall have sole control of that defense (including without limitation the right to settle the claim).

      8.2. If Licensee is prohibited from using any BalaSys Product due to an infringement claim, or if BalaSys believes that any BalaSys Product is likely to become the subject of an infringement claim, BalaSys shall at its sole option, either: (i) obtain the right for Licensee to continue to use such BalaSys Product, (ii) replace or modify the BalaSys Product so as to make such BalaSys Product non-infringing and substantially comparable in functionality or (iii) refund to Licensee the amount paid for such infringing BalaSys Product and provide a pro-rated refund of any unused, prepaid maintenance fees paid by Licensee, in exchange for Licensee's return of such BalaSys Product to BalaSys.

      8.3. Notwithstanding the above, BalaSys will have no liability for any infringement claim to the extent that it is based upon:

      (i) modification of the BalaSys Product other than by BalaSys,

      (ii) use of the BalaSys Product in combination with any product not specifically authorized by BalaSys to be combined with the BalaSys Product or

      (iii) use of the BalaSys Product in an unauthorized manner for which it was not designed.

      C.9. 9. LICENSE FEE

      9.1. The number of the Protected Hosts (including the server as one host), the configuration and the modules licensed shall serve as the calculation base of the license fee.

      9.2. Licensee acknowledges that payment of the license fees is a condition of lawful usage.

      9.3. License fees do not contain any installation or post charges.

      C.10. 10. WARRANTIES

      10.1. BalaSys warrants that during the Warranty Period, the optical media upon which the BalaSys Product is recorded will not be defective under normal use. BalaSys will replace any defective media returned to it, accompanied by a dated proof of purchase, within the Warranty Period at no charge to Licensee. Upon receipt of the allegedly defective BalaSys Product, BalaSys will at its option, deliver a replacement BalaSys Product or BalaSys's current equivalent to Licensee at no additional cost. BalaSys will bear the delivery charges to Licensee for the replacement Product.

      10.2. In case of installation by BalaSys, BalaSys warrants that during the Warranty Period, the Proxedo Network Security Suite, under normal use in the operating environment defined by BalaSys, and without unauthorized modification, will perform in substantial compliance with the Product Documentation accompanying the BalaSys Product, when used on that hardware for which it was installed, in compliance with the provisions of the user manuals and the recommendations of BalaSys. The date of the notification sent to BalaSys shall qualify as the date of the failure. Licensee shall do its best to mitigate the consequences of that failure. If, during the Warranty Period, the BalaSys Product fails to comply with this warranty, and such failure is reported by Licensee to BalaSys within the Warranty Period, BalaSys's sole obligation and liability for breach of this warranty is, at BalaSys's sole option, either:

      (i) to correct such failure,

      (ii) to replace the defective BalaSys Product or

      (iii) to refund the license fees paid by Licensee for the applicable BalaSys Product.

      C.11. 11. DISCLAIMER OF WARRANTIES

      11.1. EXCEPT AS SET OUT IN THIS LICENSE CONTRACT, BALASYS MAKES NO WARRANTIES OF ANY KIND WITH RESPECT TO THE Proxedo Network Security Suite. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, BALASYS EXCLUDES ANY OTHER WARRANTIES, INCLUDING BUT NOT LIMITED TO ANY IMPLIED WARRANTIES OF SATISFACTORY QUALITY, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT OF INTELLECTUAL PROPERTY RIGHTS.

      C.12. 12. LIMITATION OF LIABILITY

      12.1. SOME STATES AND COUNTRIES, INCLUDING MEMBER COUNTRIES OF THE EUROPEAN UNION, DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES AND, THEREFORE, THE FOLLOWING LIMITATION OR EXCLUSION MAY NOT APPLY TO THIS LICENSE CONTRACT IN THOSE STATES AND COUNTRIES. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW AND REGARDLESS OF WHETHER ANY REMEDY SET OUT IN THIS LICENSE CONTRACT FAILS OF ITS ESSENTIAL PURPOSE, IN NO EVENT SHALL BALASYS BE LIABLE TO LICENSEE FOR ANY SPECIAL, CONSEQUENTIAL, INDIRECT OR SIMILAR DAMAGES OR LOST PROFITS OR LOST DATA ARISING OUT OF THE USE OR INABILITY TO USE THE Proxedo Network Security Suite EVEN IF BALASYS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

      12.2. IN NO CASE SHALL BALASYS'S TOTAL LIABILITY UNDER THIS LICENSE CONTRACT EXCEED THE FEES PAID BY LICENSEE FOR THE Proxedo Network Security Suite LICENSED UNDER THIS LICENSE CONTRACT.

      C.13. 13. DURATION AND TERMINATION

      13.1. This License Contract shall come into effect on the date of signature of the Take Over Protocol by the duly authorized representatives of the parties.

      13.2. Licensee may terminate the License Contract at any time by written notice sent to BalaSys and by simultaneously destroying all copies of the Proxedo Network Security Suite licensed under this License Contract.

      13.3. BalaSys may terminate this License Contract with immediate effect by written notice to Licensee, if Licensee is in material or persistent breach of the License Contract and either that breach is incapable of remedy or Licensee shall have failed to remedy that breach within 30 days after receiving written notice requiring it to remedy that breach.

      C.14. 14. AMENDMENTS

      14.1. Save as expressly provided in this License Contract, no amendment or variation of this License Contract shall be effective unless in writing and signed by a duly authorised representative of the parties to it.

      C.15. 15. WAIVER

      15.1. The failure of a party to exercise or enforce any right under this License Contract shall not be deemed to be a waiver of that right nor operate to bar the exercise or enforcement of it at any time or times thereafter.

      C.16. 16. SEVERABILITY

      16.1. If any part of this License Contract becomes invalid, illegal or unenforceable, the parties shall in such an event negotiate in good faith in order to agree on the terms of a mutually satisfactory provision to be substituted for the invalid, illegal or unenforceable provision which as nearly as possible validly gives effect to their intentions as expressed in this License Contract.

      C.17. 17. NOTICES

      17.1. Any notice required to be given pursuant to this License Contract shall be in writing and shall be given by delivering the notice by hand, or by sending the same by prepaid first class post (airmail if to an address outside the country of posting) to the address of the relevant party set out in this License Contract or such other address as either party notifies to the other from time to time. Any notice given according to the above procedure shall be deemed to have been given at the time of delivery (if delivered by hand) and when received (if sent by post).

      C.18. 18. MISCELLANEOUS

      18.1. Headings are for convenience only and shall be ignored in interpreting this License Contract.

      18.2. This License Contract and the rights granted in this License Contract may not be assigned, sublicensed or otherwise transferred in whole or in part by Licensee without BalaSys's prior written consent. This consent shall not be unreasonably withheld or delayed.

      18.3. An independent third party auditor, reasonably acceptable to BalaSys and Licensee, may upon reasonable notice to Licensee and during normal business hours, but not more often than once each year, inspect Licensee's relevant records in order to confirm that usage of the Proxedo Network Security Suite complies with the terms and conditions of this License Contract. BalaSys shall bear the costs of such audit. All audits shall be subject to the reasonable safety and security policies and procedures of Licensee.

      18.4. This License Contract constitutes the entire agreement between the parties with regard to the subject matter hereof. Any modification of this License Contract must be in writing and signed by both parties.

      Creative Commons Attribution Non-commercial No Derivatives (by-nc-nd) License

      THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED. BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.

      1. Definitions

        1. "Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License.

        2. "Collection" means a collection of literary or artistic works, such as encyclopedias and anthologies, or performances, phonograms or broadcasts, or other works or subject matter other than works listed in Section 1(f) below, which, by reason of the selection and arrangement of their contents, constitute intellectual creations, in which the Work is included in its entirety in unmodified form along with one or more other contributions, each constituting separate and independent works in themselves, which together are assembled into a collective whole. A work that constitutes a Collection will not be considered an Adaptation (as defined above) for the purposes of this License.

        3. "Distribute" means to make available to the public the original and copies of the Work through sale or other transfer of ownership.

        4. "Licensor" means the individual, individuals, entity or entities that offer(s) the Work under the terms of this License.

        5. "Original Author" means, in the case of a literary or artistic work, the individual, individuals, entity or entities who created the Work or if no individual or entity can be identified, the publisher; and in addition (i) in the case of a performance the actors, singers, musicians, dancers, and other persons who act, sing, deliver, declaim, play in, interpret or otherwise perform literary or artistic works or expressions of folklore; (ii) in the case of a phonogram the producer being the person or legal entity who first fixes the sounds of a performance or other sounds; and, (iii) in the case of broadcasts, the organization that transmits the broadcast.

        6. "Work" means the literary and/or artistic work offered under the terms of this License including without limitation any production in the literary, scientific and artistic domain, whatever may be the mode or form of its expression including digital form, such as a book, pamphlet and other writing; a lecture, address, sermon or other work of the same nature; a dramatic or dramatico-musical work; a choreographic work or entertainment in dumb show; a musical composition with or without words; a cinematographic work to which are assimilated works expressed by a process analogous to cinematography; a work of drawing, painting, architecture, sculpture, engraving or lithography; a photographic work to which are assimilated works expressed by a process analogous to photography; a work of applied art; an illustration, map, plan, sketch or three-dimensional work relative to geography, topography, architecture or science; a performance; a broadcast; a phonogram; a compilation of data to the extent it is protected as a copyrightable work; or a work performed by a variety or circus performer to the extent it is not otherwise considered a literary or artistic work.

        7. "You" means an individual or entity exercising rights under this License who has not previously violated the terms of this License with respect to the Work, or who has received express permission from the Licensor to exercise rights under this License despite a previous violation.

        8. "Publicly Perform" means to perform public recitations of the Work and to communicate to the public those public recitations, by any means or process, including by wire or wireless means or public digital performances; to make available to the public Works in such a way that members of the public may access these Works from a place and at a place individually chosen by them; to perform the Work to the public by any means or process and the communication to the public of the performances of the Work, including by public digital performance; to broadcast and rebroadcast the Work by any means including signs, sounds or images.

        9. "Reproduce" means to make copies of the Work by any means including without limitation by sound or visual recordings and the right of fixation and reproducing fixations of the Work, including storage of a protected performance or phonogram in digital form or other electronic medium.

      2. Fair Dealing Rights. Nothing in this License is intended to reduce, limit, or restrict any uses free from copyright or rights arising from limitations or exceptions that are provided for in connection with the copyright protection under copyright law or other applicable laws.

      3. License Grant. Subject to the terms and conditions of this License, Licensor hereby grants You a worldwide, royalty-free, non-exclusive, perpetual (for the duration of the applicable copyright) license to exercise the rights in the Work as stated below:

        1. to Reproduce the Work, to incorporate the Work into one or more Collections, and to Reproduce the Work as incorporated in the Collections; and,

        2. to Distribute and Publicly Perform the Work including as incorporated in Collections.

        The above rights may be exercised in all media and formats whether now known or hereafter devised. The above rights include the right to make such modifications as are technically necessary to exercise the rights in other media and formats, but otherwise you have no rights to make Adaptations. Subject to 8(f), all rights not expressly granted by Licensor are hereby reserved, including but not limited to the rights set forth in Section 4(d).

      4. Restrictions. The license granted in Section 3 above is expressly made subject to and limited by the following restrictions:

        1. You may Distribute or Publicly Perform the Work only under the terms of this License. You must include a copy of, or the Uniform Resource Identifier (URI) for, this License with every copy of the Work You Distribute or Publicly Perform. You may not offer or impose any terms on the Work that restrict the terms of this License or the ability of the recipient of the Work to exercise the rights granted to that recipient under the terms of the License. You may not sublicense the Work. You must keep intact all notices that refer to this License and to the disclaimer of warranties with every copy of the Work You Distribute or Publicly Perform. When You Distribute or Publicly Perform the Work, You may not impose any effective technological measures on the Work that restrict the ability of a recipient of the Work from You to exercise the rights granted to that recipient under the terms of the License. This Section 4(a) applies to the Work as incorporated in a Collection, but this does not require the Collection apart from the Work itself to be made subject to the terms of this License. If You create a Collection, upon notice from any Licensor You must, to the extent practicable, remove from the Collection any credit as required by Section 4(c), as requested.

        2. You may not exercise any of the rights granted to You in Section 3 above in any manner that is primarily intended for or directed toward commercial advantage or private monetary compensation. The exchange of the Work for other copyrighted works by means of digital file-sharing or otherwise shall not be considered to be intended for or directed toward commercial advantage or private monetary compensation, provided there is no payment of any monetary compensation in connection with the exchange of copyrighted works.

        3. If You Distribute, or Publicly Perform the Work or Collections, You must, unless a request has been made pursuant to Section 4(a), keep intact all copyright notices for the Work and provide, reasonable to the medium or means You are utilizing: (i) the name of the Original Author (or pseudonym, if applicable) if supplied, and/or if the Original Author and/or Licensor designate another party or parties (for example a sponsor institute, publishing entity, journal) for attribution ("Attribution Parties") in Licensor's copyright notice, terms of service or by other reasonable means, the name of such party or parties; (ii) the title of the Work if supplied; (iii) to the extent reasonably practicable, the URI, if any, that Licensor specifies to be associated with the Work, unless such URI does not refer to the copyright notice or licensing information for the Work. The credit required by this Section 4(c) may be implemented in any reasonable manner; provided, however, that in the case of a Collection, at a minimum such credit will appear, if a credit for all contributing authors of Collection appears, then as part of these credits and in a manner at least as prominent as the credits for the other contributing authors. For the avoidance of doubt, You may only use the credit required by this Section for the purpose of attribution in the manner set out above and, by exercising Your rights under this License, You may not implicitly or explicitly assert or imply any connection with, sponsorship or endorsement by the Original Author, Licensor and/or Attribution Parties, as appropriate, of You or Your use of the Work, without the separate, express prior written permission of the Original Author, Licensor and/or Attribution Parties.

        4. For the avoidance of doubt:

          1. Non-waivable Compulsory License Schemes. In those jurisdictions in which the right to collect royalties through any statutory or compulsory licensing scheme cannot be waived, the Licensor reserves the exclusive right to collect such royalties for any exercise by You of the rights granted under this License;

          2. Waivable Compulsory License Schemes. In those jurisdictions in which the right to collect royalties through any statutory or compulsory licensing scheme can be waived, the Licensor reserves the exclusive right to collect such royalties for any exercise by You of the rights granted under this License if Your exercise of such rights is for a purpose or use which is otherwise than noncommercial as permitted under Section 4(b) and otherwise waives the right to collect royalties through any statutory or compulsory licensing scheme; and,

          3. Voluntary License Schemes. The Licensor reserves the right to collect royalties, whether individually or, in the event that the Licensor is a member of a collecting society that administers voluntary licensing schemes, via that society, from any exercise by You of the rights granted under this License that is for a purpose or use which is otherwise than noncommercial as permitted under Section 4(b).

        5. Except as otherwise agreed in writing by the Licensor or as may be otherwise permitted by applicable law, if You Reproduce, Distribute or Publicly Perform the Work either by itself or as part of any Collections, You must not distort, mutilate, modify or take other derogatory action in relation to the Work which would be prejudicial to the Original Author's honor or reputation.

      5. Representations, Warranties and Disclaimer UNLESS OTHERWISE MUTUALLY AGREED BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.

      6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

      7. Termination

        1. This License and the rights granted hereunder will terminate automatically upon any breach by You of the terms of this License. Individuals or entities who have received Collections from You under this License, however, will not have their licenses terminated provided such individuals or entities remain in full compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will survive any termination of this License.

        2. Subject to the above terms and conditions, the license granted here is perpetual (for the duration of the applicable copyright in the Work). Notwithstanding the above, Licensor reserves the right to release the Work under different license terms or to stop distributing the Work at any time; provided, however that any such election will not serve to withdraw this License (or any other license that has been, or is required to be, granted under the terms of this License), and this License will continue in full force and effect unless terminated as stated above.

      8. Miscellaneous

        1. Each time You Distribute or Publicly Perform the Work or a Collection, the Licensor offers to the recipient a license to the Work on the same terms and conditions as the license granted to You under this License.

        2. If any provision of this License is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this License, and without further action by the parties to this agreement, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.

        3. No term or provision of this License shall be deemed waived and no breach consented to unless such waiver or consent shall be in writing and signed by the party to be charged with such waiver or consent.

        4. This License constitutes the entire agreement between the parties with respect to the Work licensed here. There are no understandings, agreements or representations with respect to the Work not specified here. Licensor shall not be bound by any additional provisions that may appear in any communication from You. This License may not be modified without the mutual written agreement of the Licensor and You.

        5. The rights granted under, and the subject matter referenced, in this License were drafted utilizing the terminology of the Berne Convention for the Protection of Literary and Artistic Works (as amended on September 28, 1979), the Rome Convention of 1961, the WIPO Copyright Treaty of 1996, the WIPO Performances and Phonograms Treaty of 1996 and the Universal Copyright Convention (as revised on July 24, 1971). These rights and subject matter take effect in the relevant jurisdiction in which the License terms are sought to be enforced according to the corresponding provisions of the implementation of those treaty provisions in the applicable national law. If the standard suite of rights granted under applicable copyright law includes additional rights not granted under this License, such additional rights are deemed to be included in the License; this License is not intended to restrict the license of any rights under applicable law.