A next-generation firewall (NGFW) has all the features of a basic firewall, plus some or all of the additional features discussed below.

It’s important to note that not all NGFW vendors offer all these features, and sometimes the features go by different names. Some vendors require expensive additional licences for certain features. And sometimes a feature is delivered through a cloud service rather than on the firewall itself.

For licensed subscription features, it’s particularly important to understand what happens when the licence expires. Does the firewall stop working entirely? Or does it keep working, just without that specific function?

Geolocation

Geolocation is the ability to associate IP addresses with physical locations. Rather than specifying a bunch of IP address ranges that will change over time, you can specify a whole country.

I’ve often used geolocation to restrict access from countries where I know the company has no legitimate business (cough, North Korea). However, you could also use geolocation to create a special NAT rule that sends all your North American traffic to one web server and all your European traffic to a different one.

Because IP address allocations change fairly frequently, geolocation databases require regular updates to remain current.
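
Here’s a rough sketch of what a country-based rule looks like in practice, using MaxMind’s geoip2 Python library and its GeoLite2 database. The database path and the blocked-country list are just examples:

```python
import geoip2.database
import geoip2.errors

BLOCKED_COUNTRIES = {"KP"}  # ISO country code for North Korea

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")  # example path

def allow_source(ip: str) -> bool:
    """Return False if the source IP geolocates to a blocked country."""
    try:
        country = reader.country(ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return True  # unknown addresses fall through to other rules
    return country not in BLOCKED_COUNTRIES

print(allow_source("8.8.8.8"))  # True, assuming the source is not blocked
```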

IDS/IPS

Intrusion detection or prevention systems look at the contents of packets going through the firewall and try to spot things that look like attacks. In most cases, IDS/IPS devices use signatures to detect known attacks. They also look for generic attack patterns, which are less signature-dependent.

Because new attacks appear constantly, IDS/IPS devices tend to become less useful over time unless their signatures are regularly updated. This typically requires a subscription service from the vendor.
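
To make the idea concrete, here’s a toy version of signature matching. Real engines such as Snort or Suricata use a much richer rule language, but the core mechanism is scanning payloads for known-bad patterns:

```python
# Two illustrative "signatures": a suspicious command string and a run
# of NOP instructions often used as shellcode padding.
SIGNATURES = {
    b"cmd.exe /c": "Possible command injection",
    b"\x90" * 8: "NOP sled (shellcode padding)",
}

def inspect_payload(payload: bytes) -> list[str]:
    """Return the names of any signatures found in a packet payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

for alert in inspect_payload(b"GET /?q=cmd.exe /c+dir HTTP/1.1"):
    print("ALERT:", alert)
```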

Antivirus/anti-malware

As files are uploaded or downloaded, they pass through the firewall, which can do a basic examination. In most cases, this will be signature-based analysis: comparing checksums and scanning inside the file for byte patterns seen in known malware. This feature obviously requires that the files aren’t encrypted and that the firewall has a recently updated set of signatures.
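
The checksum part is simple to illustrate. Here’s a minimal sketch, assuming the firewall has a feed of known-bad SHA-256 hashes (the one entry below is the hash of an empty file, just so the example is verifiable):

```python
import hashlib

KNOWN_BAD_SHA256 = {
    # SHA-256 of the empty file, standing in for a real malware hash
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_is_known_malware(data: bytes) -> bool:
    """Hash the transferred file and look it up in the bad-hash set."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256

print(file_is_known_malware(b""))  # True: the empty file's hash is listed
```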

In truth, simple malware scanning at the firewall isn’t terribly effective because it’s so easy to hide malware with encryption. You’ll usually have better antivirus scanning on the destination computer.

Sandboxing

A better form of malware scanning is called a sandbox. This is essentially a virtual machine (VM) running a common target operating system such as Windows. The firewall intercepts the file download and sends it over to the sandbox VM where it’s “detonated,” meaning the VM tries to run the file as if it were the target computer. The sandbox then looks for common types of malicious behaviour such as connecting to command and control (C&C) networks.

Once the file has been analyzed, the VM is safely deleted and a new one is created.

In some cases, the sandbox is a separate physical box sitting at the network edge. In other cases, it’s a cloud service. It tends to be less effective to run a sandbox inside the firewall itself because the sandbox requires so much memory and CPU to run.

For many sandbox deployment models, the firewall holds onto the file and doesn’t deliver it to the end user until the cloud service has indicated it’s clean. This can result in a small delay.
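
In pseudocode terms, the hold-until-verdict model looks something like the sketch below. The REST endpoint and JSON fields are invented for illustration; every sandbox vendor’s actual API is different:

```python
import json
import time
import urllib.request

SANDBOX_URL = "https://sandbox.example.com/api/v1"  # hypothetical service

def detonate_and_wait(file_bytes: bytes, timeout: int = 300) -> bool:
    """Submit a file, poll for a verdict, and return True if it's clean."""
    req = urllib.request.Request(SANDBOX_URL + "/submit", data=file_bytes)
    with urllib.request.urlopen(req) as resp:
        job_id = json.load(resp)["job_id"]

    deadline = time.time() + timeout
    while time.time() < deadline:
        with urllib.request.urlopen(f"{SANDBOX_URL}/verdict/{job_id}") as resp:
            verdict = json.load(resp)["verdict"]
        if verdict != "pending":
            return verdict == "clean"
        time.sleep(5)  # the user experiences this as a small download delay
    return False  # fail closed if the sandbox never answers
```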

A sandbox is much more effective than signature-based malware detection because it can catch new, previously unseen varieties of malware, and it works against downloads that have been packed or encrypted to defeat signature matching. It’s still not perfect, though.

Malware writers are getting better at detecting when their code is running in a sandbox. Also, some malware targets a particular vulnerability in a particular operating system, and the defense only works if the sandbox happens to be running that vulnerable operating system.

Web proxy and URL checking

Another useful feature often included in a next-gen firewall is either a URL checker or a full web proxy service.

A web proxy sits in the middle of an encrypted HTTPS session. To the web browsing computer, it pretends to be the web server. To the web server, it pretends to be the browser. In this way, the proxy can decrypt the HTTPS session in both directions and see exactly what’s going on, hopefully detecting any malicious activity.

A URL checking service doesn’t actually decrypt the session. It just extracts the site identity (typically the hostname) and checks it against a large database of known bad or questionable sites to see whether this particular one is OK.

Typically, both types of services will provide detection for a wide range of content types, not just malware. For example, they can be used to enforce appropriate-use policies against adult sites, gaming, video streaming, and so forth.
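
Here’s a simplified sketch of the URL-checker approach: no decryption, just a category lookup on the hostname, which can be pulled from DNS queries or the TLS SNI field. The category data and policy table are invented for illustration:

```python
BLOCKED_CATEGORIES = {"malware", "adult", "gaming"}

SITE_CATEGORIES = {  # in practice, a vendor feed with millions of entries
    "malware-site.example": "malware",
    "slots.example": "gaming",
    "news.example": "news",
}

def allow_site(hostname: str) -> bool:
    """Apply the appropriate-use policy to a hostname; no decryption needed."""
    category = SITE_CATEGORIES.get(hostname, "uncategorized")
    return category not in BLOCKED_CATEGORIES

print(allow_site("slots.example"))  # False: gaming is blocked by policy
print(allow_site("news.example"))   # True
```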

Both URL checkers and web proxies can be cloud services, or they can use separate physical boxes on-site. And there are operational models that don’t even include the firewall. For example, a common web proxy cloud service model involves the web browser communicating directly with the cloud service.

As more and more web traffic is encrypted, it’s becoming increasingly difficult to police web traffic without a decrypting web proxy. So it’s a very useful part of your security infrastructure. However, it doesn’t need to be part of the firewall.

The other way web proxies can be useful is in caching content for frequently accessed web sites. If several people access the same content on a common web site, the proxy can download and keep the content the first time it’s accessed. Then each subsequent viewer gets the cached version, which is faster because it’s local.

The cache also keeps that traffic off the Internet link, which reduces overall congestion. This only works if the proxy is physically inside your network, instead of a cloud service. And, because proxies tend to be fairly expensive, it’s really only cost-effective at larger sites.
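
A toy version of the caching logic looks like this. Real proxies also honour HTTP Cache-Control headers and expiry times, which this sketch ignores:

```python
import urllib.request

_cache: dict[str, bytes] = {}

def fetch(url: str) -> bytes:
    """Return cached content if we have it; otherwise fetch and store it."""
    if url not in _cache:
        with urllib.request.urlopen(url) as resp:
            _cache[url] = resp.read()
    return _cache[url]

page1 = fetch("http://example.com/")  # goes out to the Internet
page2 = fetch("http://example.com/")  # served locally from the cache
```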

Reverse proxy

A reverse proxy is similar to a proxy, except that instead of sitting in front of the web browser and protecting it against many web sites, it sits in front of the web server and protects it against many browsers. The reverse proxy holds the SSL certificates for the web server, and offloads the SSL work from that server.

One of the principal benefits of a reverse proxy is that it can sit in front of a relatively insecure web server and ensure attackers can’t hit it directly. A reverse proxy can also help mitigate certain types of denial of service attacks, a service model that’s particularly effective if the reverse proxy is a cloud service.
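
To show the idea, here’s a bare-bones TLS-terminating reverse proxy in Python. The backend address and certificate paths are assumptions, and a real reverse proxy would handle far more than GET requests, but it illustrates where the SSL offloading happens:

```python
import http.server
import ssl
import urllib.request

BACKEND = "http://127.0.0.1:8080"  # assumed internal web server

class ReverseProxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The TLS session with the browser ends here; the backend
        # request goes out as plain HTTP.
        with urllib.request.urlopen(BACKEND + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

server = http.server.HTTPServer(("", 8443), ReverseProxy)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # the proxy holds the certs
server.socket = ctx.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```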

Web application firewall

A web application firewall (WAF) is a more sophisticated version of a reverse proxy. In truth, I haven’t seen really good WAF implementations built into next-gen firewalls, but there’s no reason it couldn’t be done.

A WAF enforces good HTTP and HTTPS behaviour. It’s usually implemented to decrypt the HTTPS packets and forward them to the web server as standard HTTP traffic. The WAF holds the SSL certificates for the web server. In this way, the WAF is able to fully inspect the contents of every packet.

A WAF typically looks for things like attempted buffer overflow attacks on input fields, SQL injection attacks, cross-site scripting, and so forth. It also tries to detect any attempts to exploit known vulnerabilities in web server software.
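
As a flavour of what those checks look like, here’s a tiny rule engine with two illustrative patterns. Production rule sets such as the OWASP Core Rule Set are vastly more thorough:

```python
import re

RULES = [
    (re.compile(r"('|--|;)\s*(or|and)\s+\d+=\d+", re.I), "SQL injection"),
    (re.compile(r"<script\b", re.I), "Cross-site scripting"),
]

def inspect_request(params: dict[str, str]) -> list[str]:
    """Check every decoded form/query parameter against the rule list."""
    findings = []
    for value in params.values():
        for pattern, name in RULES:
            if pattern.search(value):
                findings.append(name)
    return findings

print(inspect_request({"user": "admin' OR 1=1"}))  # ['SQL injection']
```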

Load balancing

Some next-gen firewalls include a load balancer feature. I’ve used this for both data center and remote branch implementations.

At the data center, it’s useful to be able to split the load across multiple web servers on your DMZ. Normally this would be done by a discrete load balancer appliance, but it makes a lot of sense to combine the reverse proxy and WAF functions with the load balancer since they’re complementary, particularly for HTTPS traffic. But if you have a lot of web traffic, it makes more sense to separate the firewall from these other functions. It’s also nice for management reasons to have distinct logical control points.

At a branch location, I’ve used load balancer features on firewalls to automatically redirect traffic to a secondary backup server at a different data center. Also, if I adjust the priorities differently for different branches, I can make half of them use one server and the other half use a different server. Then, if either of these servers fails, the remaining one automatically takes over for all branches.
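
The branch-side logic I’m describing amounts to an ordered server list per branch plus a health check. A minimal sketch, with hypothetical server names and the health check stubbed out:

```python
# Each branch prefers a different data center; the other is its backup.
BRANCH_SERVER_PRIORITY = {
    "branch-east": ["dc1.example.com", "dc2.example.com"],
    "branch-west": ["dc2.example.com", "dc1.example.com"],
}

HEALTHY = {"dc1.example.com": True, "dc2.example.com": True}  # stubbed

def pick_server(branch: str) -> str:
    """Return the highest-priority healthy server for this branch."""
    for server in BRANCH_SERVER_PRIORITY[branch]:
        if HEALTHY.get(server):
            return server
    raise RuntimeError("no healthy servers available")

HEALTHY["dc1.example.com"] = False   # simulate a data center failure
print(pick_server("branch-east"))    # dc2.example.com takes over
```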

Threat intelligence

Threat intelligence is dynamic information that gets downloaded from a cloud service at regular intervals. The dynamic information is then used to help detect and block malicious behaviour.

Unfortunately, exactly what kinds of information are included in a threat intelligence feed can vary wildly. In some cases, it includes IP addresses of things like known spammers and known command-and-control servers. In other cases, it might include specific indicators of compromise that can be used to help detect malware.
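
Mechanically, consuming a feed can be as simple as the sketch below. The URL is hypothetical and the format is a plain list of IP addresses; real feeds often use richer formats such as STIX/TAXII:

```python
import urllib.request

FEED_URL = "https://intel.example.com/bad-ips.txt"  # hypothetical feed

def refresh_blocklist() -> set[str]:
    """Download the feed and return the current set of addresses to block."""
    with urllib.request.urlopen(FEED_URL) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return {line.strip() for line in lines if line.strip()}

blocklist = refresh_blocklist()  # re-run this on a timer, e.g. hourly
```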

Behaviour analysis

Behaviour analysis means trying to spot malicious applications by the fact that they do something unexpected. For example, a web server shouldn’t be making outbound connections to unknown IP addresses.
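
That web server example reduces to a simple rule, sketched below with a hand-built baseline; real products build the baseline by observing traffic over time:

```python
# Destinations this web server is expected to talk to (assumed values:
# its database and backup servers).
EXPECTED_DESTINATIONS = {"10.0.0.5", "10.0.0.6"}

def check_connection(source_role: str, dest_ip: str) -> None:
    """Alert on outbound connections that fall outside the baseline."""
    if source_role == "web-server" and dest_ip not in EXPECTED_DESTINATIONS:
        print(f"ALERT: web server opened unexpected connection to {dest_ip}")

check_connection("web-server", "10.0.0.5")       # fine
check_connection("web-server", "203.0.113.99")   # prints an alert
```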

The term is also used to describe how sandboxes detect malware based on suspicious actions like trying to modify the Windows registry or deliberately trying to write past the end of allocated memory.

Behaviour analysis is an emerging trend in security, and it potentially requires a lot of CPU and memory resources to monitor application behaviour adequately.

Central management

There are two sides to firewall management, and some vendors split these functions into two distinct central applications. The first is configuration management, including the ability to manage and push out policies across large numbers of devices. The second is security monitoring.

Configuration management is a tricky problem. If you have a lot of firewalls, chances are there will be important differences between their configurations based on local requirements. At a minimum, the differences will include basic network configuration like interfaces and IP addresses. But if there are externally accessible resources (servers) at different sites, you’ll probably also need to have different firewall rules on every device.

At the same time, you probably still want to have centrally coordinated policies for next-gen functions such as which IDS/IPS signatures you want to use and web proxy settings. You could manually create the policies on every firewall, but it’s much easier to create the policies centrally and push them out to all devices. Central management is particularly important when you have active/standby configurations involving two firewalls that must have identical configurations to work properly.
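
Conceptually, the push model merges a centrally defined policy with each device’s local settings. Here’s a sketch; the device names, policy contents, and push_config() function are all invented for illustration:

```python
SHARED_POLICY = {"ips_signatures": "balanced", "web_proxy": "enabled"}

DEVICES = {
    "fw-hq":      {"wan_ip": "198.51.100.1", "local_rules": ["allow dmz web"]},
    "fw-branch1": {"wan_ip": "198.51.100.9", "local_rules": []},
}

def push_config(device: str, config: dict) -> None:
    print(f"pushing to {device}: {config}")  # stand-in for a real API call

for name, local in DEVICES.items():
    # Local settings win wherever they overlap with the shared policy.
    push_config(name, {**SHARED_POLICY, **local})
```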

You could handle the central security monitoring with software from your firewall vendor, or with a separate Security Information and Event Management (SIEM) console. Or you could use both. I actually like to use both because the vendor software usually provides better detail on the specific security alarms and requires less work to extract useful information.

Another really useful function of a central management console is coordinating automatic updates of new rules, including threat intelligence feed data as well as new IDS/IPS signatures. Security is a highly dynamic field. New threats appear constantly, and it makes life a lot easier if you can download and push out new policies and rules automatically.

And, if you can download and push out policies and rules, you can do the same with software/firmware updates for the firewall devices. Some next-gen firewall vendors don’t seem to release firmware updates very often, while others release new versions several times a year. Since critical security bugs do appear in firewalls, I like to stay on top of the release notes for these new releases. And when I have a lot of firewalls, I like to be able to automate these updates.