Channel: Cloudflare Blog

Introducing CFSSL - CloudFlare's PKI toolkit


Today we’re proud to introduce CFSSL—our open source toolkit for everything TLS/SSL. CFSSL is used internally by CloudFlare for bundling TLS/SSL certificate chains, and for our internal Certificate Authority infrastructure. We use this tool for all our TLS certificates.

Creating a certificate bundle is a common pain point for website operators, and doing it right is important for website security AND speed (CloudFlare does both). Getting the correct bundle together is a black art, and can quickly become a debugging nightmare if it's not done correctly. We wrote CFSSL to make bundling easier. By picking the right chain of certificates, CFSSL solves the balancing act between performance, security, and compatibility.

Staying true to our promise to make the web fast and safe, we’re open sourcing CFSSL. We believe CFSSL will be useful for anyone building a site using HTTPS—from website owners to large software-as-a-service companies. CFSSL is written in Go and available on the CloudFlare Github account. It can be used as a web service with a JSON API, and as a handy command line tool.

CFSSL is the result of real-world expertise about how the TLS ecosystem on the Web works, gained by operating at CloudFlare’s scale. These are hard-won lessons, applied in code. The rest of this blog post serves as an under-the-hood look at how and why CFSSL works, and how you can use it as a certificate bundler or as a lightweight CA.

As you can see in the above image, the key is owned by CloudFlare, Inc., and the issuer is GlobalSign (a well-known CA). GlobalSign has issued a certificate named “GlobalSign Extended Validation CA - SHA256 - G2”; this G2 certificate is signed by another certificate called “GlobalSign Root CA - R2”. Note that the root certificate does not have an issuer—it is signed by its own private key. In other words, the root vouches for itself.

The reason your browser trusts a root certificate is because browsers have a list of root certificates that they implicitly trust, and when a site is trusted it will show the lock icon to the left of the web address (see image below). Certain certificates are on the list typically because they have been around for some time, and they belong to certificate authorities that have gone through a rigorous security audit. GlobalSign is one of these trusted authorities; therefore, its root certificate is in the list of trusted root certificates for nearly every browser.

What Are SSL Certificates?

SSL certificates form the core of trust on the web by assuring the identity of websites. This trust is built by digitally binding a cryptographic key to an organization’s identity. An SSL certificate will bind domain names to server names, and company names to locations. This ensures that if you go to your bank’s web site, for example, you know for sure it is your bank, and you are not giving out your information to a phishing scam.

A certificate is a file that contains a public key which is bound to a record of its owner. The mechanism that makes this binding trustworthy is called the Public Key Infrastructure (PKI), and public-key cryptography is the glue that makes it possible.

A certificate is associated with a private key that corresponds to the certificate's public key, which is stored separately. The certificate comes with a digital signature from a trusted third-party called a certificate authority or CA. Let's examine the cloudflare.com certificate.

The number of intermediate certificates in a chain can vary, but chains will always have one leaf and one root.

Unfortunately, there is a catch for the owner of the leaf certificate: presenting only the leaf certificate to the browser is usually not enough. The intermediate certificates are not always known to the browser, requiring the website to include them with the leaf certificate. The list of certificates that the browser needs to validate a certificate is called a certificate bundle. It should contain all the certificates in the chain up to the first certificate known to the browser. In the case of the CloudFlare website, this bundle contains both the cloudflare.com certificate and the GlobalSign G2 intermediate.

CFSSL was written to make certificate bundling easier.

How CFSSL Makes Certificate Bundling Easier.

If you are running a website (or perhaps some other TLS-based service) and need to install a certificate, CFSSL can create the certificate bundle for you. Start with the following command:

$ cfssl bundle -cert mycert.crt

This will output a JSON blob containing the chain of certificates along with relevant information extracted from that chain. Alternatively, you can run the CFSSL service that responds to requests with a JSON API:

$ cfssl serve

This command opens up an HTTP service on localhost that accepts requests. To bundle using this API, send a POST request to this service, http://localhost:8888/api/v1/cfssl/bundle, using a JSON request such as:

{
    "certificate": <PEM-encoded certificate>
}

The CFSSL service will return a JSON response of the form:

{
    "result": {<certificate bundle JSON>},
    "success": true,
    "errors": [],
    "messages": []
}
(Developers take note: this response format is a preview of our upcoming CloudFlare API rewrite; with this API, we can use CFSSL as a service for certificate bundling and more—stay tuned.)
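As a sketch of what talking to this API looks like, here are two illustrative helpers (the function names and the canned response are ours, not part of CFSSL) that build the request body and unwrap the response envelope described above:

```python
import json

def bundle_request(pem: str) -> bytes:
    """Build the JSON body for a POST to /api/v1/cfssl/bundle."""
    return json.dumps({"certificate": pem}).encode()

def unwrap(response_body: bytes):
    """Pull the result out of the standard CFSSL response envelope."""
    envelope = json.loads(response_body)
    if not envelope["success"]:
        raise RuntimeError(envelope["errors"])
    return envelope["result"]

# Exercise the helpers with a canned (illustrative) response:
canned = json.dumps({"result": {"bundle": "..."},
                     "success": True, "errors": [], "messages": []}).encode()
result = unwrap(canned)
```

A real client would send `bundle_request(...)` to the running `cfssl serve` instance and pass the HTTP response body to `unwrap`.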


If you upload your certificate to CloudFlare, this is what is used to create a certificate bundle for your website.

To create a certificate bundle with CFSSL, you need to know which certificates are trusted by the browsers you hope to display content to. In a controlled corporate environment, this is usually easy since every browser is set up with the same configuration; however, it becomes more difficult when creating a bundle for the web.

Different Certs for Different Folks.

Each browser has unique capabilities and configurations, and a certificate bundle that’s trusted in one browser might not be trusted in another; this can happen for several reasons:

  1. Different browsers trust different root certificates.

    Some browsers trust more root certificates than others. For example, the NSS root store used in Firefox trusts 143 certificates; however, Windows 8.1 trusts 391 certificates. So a bundle with a chain that expects the browser to trust a certificate exclusive to the Windows root store will appear as untrusted in Firefox.


  2. Older systems might have old root stores.

    Some browsers run on older systems that have not been updated to trust recently-created certificate authorities. For example, Android 2.2 and earlier don't trust the “GoDaddy Root Certificate Authority G2” because that root certificate was created after they were released.


  3. Older systems don't support modern cryptography.

    Users on Windows XP SP2 and earlier can't validate certificates signed by certain intermediates. This includes the “DigiCert SHA2 High Assurance Server CA” because the hash function SHA-2 used in this certificate was standardised after Windows XP was released.

In order to provide maximum compatibility between SSL chains and browsers, you often have to pick a different certificate chain than the one originally provided to you by your CA. This alternate chain might contain a different set of intermediates that are signed by a different root certificate. Alternate chains can be troublesome. They tend to contain a longer list of certificates than the default chain from the CA, and longer chains cause site connections to be slower. This lag is because the web server needs to send more certificates (i.e. more data) to the browser, and the browser has to spend time verifying more certificates on its end. Picking the right chain can be tricky.

CFSSL helps pick the right certificate chain, selecting for performance, security, and compatibility.

How to Pick the Best Chain.

The chain of trust works by following the keys used to sign certificates, and there can be multiple chains of trust for the same keys.

In this diagram, all of the intermediates have the same public key and represent the same identity (GlobalSign's intermediate signing certificate), and they are signed by the GlobalSign root key; however, the issuance date for each chain and signature type are different.

For some outdated browsers, the current GlobalSign root certificate is not trusted, so GlobalSign used an older CA (GlobalSign nv-sa) to sign their root certificate. This allows older browsers to trust certificates issued by GlobalSign.

Each of these is a valid chain for some browsers, but each has drawbacks:

  • CloudFlare leaf → GlobalSign SHA2 Intermediate.

    This chain is trusted by any browser that trusts the GS root G2 and supports SHA2 (i.e., this chain would not be trusted by a browser on Windows XP SP2).


  • CloudFlare leaf → 2012 GlobalSign Intermediate → GS Root G2.

    This chain is trusted by any browser that trusts GS Root G2, but does not benefit from the stronger cryptographic properties of the SHA-2 hashing algorithm.


  • CloudFlare leaf → 2012 GlobalSign Intermediate → GS cross-signed root.

    This chain is trusted by any browser that trusts the GlobalSign nv-sa root, but uses the older (and weaker) GlobalSign nv-sa root certificate.

This last chain is the most common because it’s trusted by more browsers; however, it’s also the longest, and has weaker cryptography.

CFSSL can create either the most common or the optimal bundle, and if you need help deciding, the documentation that ships with CFSSL has tips on choosing.

If you decide to create the optimal bundle, there’s a chance it might not work in some browsers; however, CFSSL will tell you specifically which browsers it will not work with. For example, it will warn the user about bundles that contain certificates signed with newer hash functions such as SHA-2, which can be problematic for certain operating systems like Windows XP SP2.

CFSSL at CloudFlare.

All paid CloudFlare accounts receive HTTPS service automatically; to make this happen, our engineers do a lot of work behind the scenes.

CloudFlare must obtain a certificate for each site on the service, and we want these certificates to be valid on as many browsers as possible; getting a certificate that works in multiple browsers is a challenge, but CFSSL makes things easier.

To start, a key-pair is created for the customer through a call to CFSSL's genkey API with the required certificate information:

{
    "CN": "customer.com",
    "hosts": [
             "customer.com",
             "www.customer.com"
    ],
    "key": {
           "algo": "rsa",
           "size": 2048
    },
    "names": [
             {
                    "C": "US",
                    "L": "San Francisco",
                    "O": "Customer",
                    "OU": "Website",
                    "ST": "California"
             }
    ]
}
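For illustration, a request like the one above could be assembled programmatically. The helper below is our own sketch (not part of CFSSL), with the subject fields hard-coded as in the example:

```python
import json

def genkey_request(common_name, hosts, org="Customer", key_bits=2048):
    """Assemble a genkey request with the field names shown in the example."""
    return {
        "CN": common_name,
        "hosts": hosts,
        "key": {"algo": "rsa", "size": key_bits},
        "names": [{"C": "US", "L": "San Francisco", "O": org,
                   "OU": "Website", "ST": "California"}],
    }

req = genkey_request("customer.com", ["customer.com", "www.customer.com"])
print(json.dumps(req, indent=4))
```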

Next, the CFSSL service responds with a key and a Certificate Signing Request (CSR). The CSR is sent to the CA to verify a site’s identity. Once the CA has validated the CSR and the identity of the site, it returns a certificate signed by one of its issuing intermediates.

Once we have a certificate for a site (whether created by our CA partner or uploaded by the customer), we send the certificate to CFSSL's bundler to create a certificate bundle for the customer.

By default, we pick the most common chain. For customers who prefer performance to compatibility, we’ll soon introduce an option to rebundle certificates for optimum chains.

CFSSL as a Certificate Authority.

CFSSL is not only a tool for bundling certificates; it can also be used as a CA. This is possible because it covers the basic features of certificate creation: creating a private key, building a certificate signing request, and signing certificates.

You can create a brand new CA with CFSSL using the -initca option. As we saw above, this takes a JSON blob with the certificate request, and creates a new CA key and certificate:

$ cfssl gencert -initca ca_csr.json

This will return a CA certificate and CA key that is valid for signing. The CA is meant to function as an internal tool for creating certificates. CFSSL should be used behind a middle layer that processes incoming requests, and ensures they conform to policy; we use it behind the CloudFlare site as an internal service.

Here’s an example of signing a certificate with CFSSL on the command line:

$ cfssl sign -ca-key=test-key.pem -ca=test-cert.pem www.example.com example.csr

Alternatively, a POST request containing the CSR and hostname can be sent to the CFSSL service via the /api/v1/cfssl/sign endpoint. To run the service, call the serve command as follows:

$ cfssl serve -address=localhost -port=8888 -ca-key=test-key.pem -ca=test-cert.pem

If you already have a CFSSL instance running (in this case on localhost, but it can be anywhere), you can automate certificate creation with the gencert command’s -remote option. For example, if CFSSL is running on localhost, running the following gives you a private key and a certificate signed by the CA:

$ cfssl gencert -remote="localhost:8888" www.example.com csr.json

At CloudFlare we use CFSSL heavily for Railgun and other internal services. Special thanks go to Kyle Isom, Zi Lin, Sebastien Pahl, and Dane Knecht, and the rest of the CloudFlare team for making this release possible.

Looking Ahead.

CloudFlare’s mission is to help build a better web, and we believe that improved certificate bundling and certificate authority tools are a step in the right direction. Encrypted sites create a safer, more private internet for everyone; by open-sourcing CFSSL, we’re making this process easier.

We have big plans for the CA part of this tool. Currently, we run CFSSL on secure locked-down machines, but plan to add stronger hardware security. Adding stronger hardware security involves integrating CFSSL with low-cost Trusted Platform Modules (TPMs). This ensures that private keys stay private, even in the event of a breach.

For additional information on CFSSL and our other open source projects, check out our Github page. We encourage users to file issues on Github as they find bugs.


CloudFlare Joins Three More Peering Exchanges in Australia


In the coming weeks, connectivity to CloudFlare in Australia is going to a new level. As part of CloudFlare’s ongoing upgrades program, we established connections to three new Internet exchanges: the Megaport Internet exchanges in Sydney, Brisbane, and Melbourne. These connections doubled the number of Australian Internet exchanges we reach and marked the first exchanges outside of Sydney that CloudFlare participates in.

What is Peering?

When two ISPs peer, they agree to exchange traffic directly with each other rather than sending it via a third party. By doing this, both partners avoid congested paths between transit providers, and they avoid paying to ship traffic—it's win-win!

Peering exchanges mean that CloudFlare can significantly increase service performance for users on ISPs that peer with us. Take Australia, for example: for users on ISPs peering at Megaport, instead of CloudFlare sending traffic to those ISPs' transit providers, we can now route the traffic directly to them. The result is lower latency, and traffic taking paths that are often less congested.

Low latency is crucial for internet speed due to the nature of TCP, the fundamental protocol on which the internet is built. TCP operates in such a way that any packet loss from a congested transit link will significantly slow a connection, and, conversely, connections with reduced latency will hugely amplify performance for end users. Therefore, by moving traffic to less congested, more direct “pipes” on the internet, CloudFlare is creating a faster web.

The More the Merrier

CloudFlare understands the importance of peering, and our team has put considerable time and effort into finding peering partners as our network expands. We peer as much as possible, both by participating at internet exchanges, and by establishing direct interconnects with ISPs.

As I write, we’re in the process of deploying equipment to many new data centers around the globe, and extending our network to reach more peering exchanges. In addition to the twenty-eight plus internet exchanges where CloudFlare already peers, we will soon be participating at: Terremark São Paulo, PTT-SP (São Paulo Brazil), EspanIX, FranceIX, MIX, NetNod, JPNAP, and, of course, Megaport. We’re also constantly commissioning new private interconnects with a range of eyeball networks.

Peering = A Better Web

CloudFlare is building a better web, and part of that project includes reducing the distance packets travel between you and your ISP. As the months go by, our network is expanding around the globe, and more of our traffic is sent through peering partners. The result? Faster and more reliable content delivery to our users.

CloudFlare is committed to finding the most direct path over the internet to deliver your traffic, and, as the reach of our network expands, you can expect that our service will only get better.



CloudFlare maintains an open peering policy. Our peering details can be found here. Please contact us if you are an ISP on any of the IXs we participate in and would like to peer.

Courage to change things


This was an internal email that I sent to the CloudFlare team about how we are not afraid to throw away old code. We thought it was worth sharing with a wider audience.

Date: Thu, 10 Jul 2014 10:24:21 +0100
Subject: Courage to change things
From: John Graham-Cumming
To: Everyone

Folks,

At the Q3 planning meeting I started by making some remarks about how much
code we are changing at CloudFlare. I understand that there were audio
problems and people may not have heard these clearly, so I'm just going to
reiterate them in writing.

One of the things that CloudFlare is being brave about is looking at old code
and deciding to rewrite it. Lots of companies live with legacy code and build
on it and it eventually becomes a maintenance nightmare and slows the company
down.

Over the last year we've made major strides in rewriting parts of our code
base so that they are faster, more maintainable, and easier to enhance. There
are many parts of the Q3 roadmap that include replacing old parts of our
stack. This is incredibly important as it enables us to be more agile and 
more stable in future.

We should feel good about this.

We're not just rewriting so that engineers have fun with new stuff, we're
making things better across the board. Many, many companies don't have the
courage to rewrite; even fewer have the courage to do things we've done like
write a brand new DNS server or replace our core request handling
functionality.

We should also not feel bad about this either: it's not a sign that we did
things wrong in the first place. We operate in a very fast moving environment.
That's the nature of a start-up. As we grow our requirements change (we have
more sites on us, more traffic, more variety and more ideas) and so our code
needs to change too.

So don't be surprised by items on the roadmap that talk about replacing code.
Like a well maintained aircraft we take CloudFlare's code in for regular
maintenance and change what's worn.

John.

Want to help write new code and replace the old? We're hiring in San Francisco and London.

Listo! Medellin, Colombia: CloudFlare's 28th Data Center


“What’s that? CloudFlare’s 28th data center is in Medellin, Colombia!?”

With the World Cup at an end, so too is our latest round of data center expansion. Following deployments in Madrid, Milan and São Paulo, we are thrilled to announce our 28th data center in Medellin, Colombia. Most of Colombia’s 22 million Internet users are now mere milliseconds away from a CloudFlare data center.

A data center unlike the others

Our deployment in Medellin is launched in partnership with Internexa, operators of the largest terrestrial communications network (IP backbone) in Latin America. Internexa operates over 28,000 km of fibre crossing seven countries in the continent. Our partnership was formed over a shared vision to build a better Internet—in this case, by localizing access to content within the region. Today, it is estimated that as much as 80% of content accessed in Latin America comes from overseas. It is with great pride that, as of now, all 2 million sites using CloudFlare are available locally over Internexa’s IP backbone. Let’s just say we’ve taken a bite out of this percentage (and latency)!

Lots of bits in Medellin

If your Internet service provider (ISP) is not connected to Internexa, fear not. We are constantly at work to improve our connectivity, and we’ve only begun our expansion throughout Latin America.

CloudFlare es la berraquera

One of our missions here at CloudFlare is to give any website owner the tools and tricks used by the Internet giants to increase the speed and security of their websites. With a few clicks of a button it is now possible to make your site fast in Colombia (and 27 other locations around the world), protect it from the largest DDoS attacks and ensure its availability 100% of the time. Que chévere!

Experimenting with mozjpeg 2.0


One of the services that CloudFlare provides to paying customers is called Polish. Polish automatically recompresses images cached by CloudFlare to ensure that they are as small as possible and can be delivered to web browsers as quickly as possible.

We've recently rolled out a new version of Polish that uses updated techniques (and was completely rewritten from a collection of programs into a single executable written in Go). As part of that rewrite we looked at the performance of the recently released mozjpeg 2.0 project for JPEG compression.

To get a sense of its performance (both in terms of compression and in terms of CPU usage) when compared to libjpeg-turbo I randomly selected 10,000 JPEG images (totaling 2,564,135,285 bytes for an average image size of about 256KB) cached by CloudFlare and recompressed them using the jpegtran program provided by libjpeg-turbo 1.3.1 and mozjpeg 2.0. The exact command used in both cases was:

jpegtran -outfile out.jpg -optimise -copy none in.jpg

Of the 10,000 images in cache, mozjpeg 2.0 failed to make 691 of them any smaller compared with 3,471 for libjpeg-turbo. So mozjpeg 2.0 was significantly better at recompressing images.

On average images were compressed by 3.0% using mozjpeg 2.0 (ignoring images that weren't compressed at all) and by 2.5% using libjpeg-turbo (again ignoring images that weren't compressed at all). This is lower than Mozilla's reported 5% improvement over libjpeg-turbo.

So, mozjpeg 2.0 achieved better compression on this set of files and compressed many more of them (93.1% vs. 65.3%).
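Those percentages follow directly from the failure counts reported above; a quick check:

```python
# 10,000 images total; mozjpeg 2.0 failed to shrink 691 of them,
# libjpeg-turbo failed to shrink 3,471.
TOTAL = 10_000

def success_rate(failed, total=TOTAL):
    """Percentage of images that were made smaller."""
    return round((total - failed) / total * 100, 1)

print(success_rate(691))    # mozjpeg 2.0  -> 93.1
print(success_rate(3471))   # libjpeg-turbo -> 65.3
```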

As example, here's an image, not from the sample set. Its original size was 1,984,669 bytes. When compressed with libjpeg-turbo it is 1,956,200 bytes (2.4% removed); when compressed with mozjpeg 2.0 it is 1,874,491 (5.6% removed). (The mozjpeg 2.0 version is 4.2% smaller than the libjpeg-turbo version).

Pic du Midi

The distribution of compression ratios seen using mozjpeg 2.0 is shown below.

Compression seen

This improved compression comes at a price. The run time for the complete compression (including where compression failed to create an improvement) was 273 seconds for libjpeg-turbo and 474 seconds for mozjpeg 2.0. So mozjpeg 2.0 took about 1.7x longer, but, of course, achieved better compression on more of the files.

Because we'd like to get the highest compression possible we've assigned an engineer internally to look at optimization of mozjpeg 2.0 (specifically for the Intel processors we use) and will contribute back improvements to the project.

We're investing quite heavily in optimization projects (such as improvements to gzip (code here) and LuaJIT, and things like a very fast Aho-Corasick implementation). If you're interested in low-level optimization for Intel processors, think about joining us.

PS After this blog post was published some folks pointed out that the best comparison would be when the -progressive flag is used. I went back and checked and I had in fact done that in the 10,000 file test and so the data there is correct. However, the command shown above is not. The actual command used was:

jpegtran -outfile out.jpg -optimise -progressive -copy none in.jpg

Also, the image shown above was generated using the incorrect command because I did it outside the 10,000 file test. That paragraph above should say:

As example, here's an image, not from the sample set. Its original size was
1,984,669 bytes. When compressed with libjpeg-turbo it is 1,885,090 bytes
(5% removed); when compressed with mozjpeg 2.0 it is 1,874,491 (5.6% removed). 
(The mozjpeg 2.0 version is 0.6% smaller than the libjpeg-turbo version).
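The corrected savings figures can be recomputed directly from the byte counts:

```python
def pct_smaller(original, compressed):
    """Percent reduction relative to the original size."""
    return round((1 - compressed / original) * 100, 1)

ORIGINAL = 1_984_669   # original image
TURBO    = 1_885_090   # libjpeg-turbo output
MOZ      = 1_874_491   # mozjpeg 2.0 output

print(pct_smaller(ORIGINAL, TURBO))  # libjpeg-turbo savings -> 5.0
print(pct_smaller(ORIGINAL, MOZ))    # mozjpeg 2.0 savings   -> 5.6
print(pct_smaller(TURBO, MOZ))       # mozjpeg vs. turbo     -> 0.6
```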

CloudFlare Now Supports WebSockets


CC BY 2.0 from Brian Snelson

I'm pleased to announce that CloudFlare now supports WebSockets. The ability to protect and accelerate WebSockets has been one of our most requested features. As of today, CloudFlare is rolling out WebSocket support for any Enterprise customer, and a limited set of CloudFlare Business customers. Over the coming months, we expect to extend support to all Business and Pro customers.

We're rolling out WebSockets slowly because it presents a new set of challenges. The story below chronicles the challenges of supporting WebSockets, and what we’ve done to overcome them.

The Web Before WebSockets

Before diving into WebSockets, it's important to understand HTTP—the traditional protocol of the web. HTTP supports a number of different methods by which a request can be sent to a server. When you click on a traditional link you are sending a GET request to a web server. The web server receives the request, then sends a response.

When you submit a web form (such as when you're giving your username and password when logging into an account) you use another HTTP method called POST, but the interaction is functionally the same. Your browser (called the ‘client’) sends data to the web server which is waiting to receive it. The web server then sends a response back to your browser. Your browser accepts the response because it's waiting for it after having sent the original request.

Regardless of the HTTP method, the communication between your browser and the web server operates in this lockstep request then response fashion. Once the client's browser has sent the request, it can't be modified.

In order to get new content, the user had to refresh the full page. This was the state of the web until 1999, when the Outlook Web Access team, unhappy with the poor user experience, introduced a custom extension to Internet Explorer called XMLHttpRequest (aka AJAX). From then on, web applications could use JavaScript to trigger HTTP requests programmatically in the background without the need for a full page refresh.

However, to make sure the page on the client's browser is up to date, the JavaScript needed to trigger the AJAX request every few seconds. This is like asking the web server all the time: is there anything new yet? is there anything new yet?... This works, but it's not particularly efficient.

Ideally, what you'd want is a persistent open connection between the browser and the server allowing them to exchange data in real-time, not just when data is requested.

Prior to WebSockets, there were a few attempts at creating a persistent open connection. These would effectively open an HTTP request and hold it open for an extended period of time. There were various solutions referred to by the name “Comet”. Although they generally worked, they were pretty much a hack with limited functionality, and they often imposed more overhead than necessary. What was needed was a new protocol supported by both browsers and servers.

Enter WebSockets

WebSockets were adopted as a standard web protocol in 2011. Today, they’re supported by all modern versions of major browsers. The WebSocket protocol is a distinct TCP-based protocol, however, it’s initiated by an HTTP request which is then "upgraded" to create a persistent connection between the browser and the server. A WebSocket connection is bidirectional: the server can send data to the browser without the browser having to explicitly ask for it. This makes things like multiplayer games, chat, and other services that require real-time exchange of information possible over a standard web protocol.
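The "upgrade" mentioned above is an ordinary HTTP exchange. The header values below are the example values from RFC 6455 (a real client generates a random Sec-WebSocket-Key per connection):

```http
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the 101 response, the TCP connection stops carrying HTTP and becomes a persistent, bidirectional WebSocket channel.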

CloudFlare is built on a modified version of the NGINX web server, and NGINX began supporting WebSocket proxying beginning with version 1.3.13 (February 2013). As soon as NGINX proxying support was in place, we investigated how we could support WebSockets for our customers. The challenge was that WebSockets have a very different connection profile, and CloudFlare wasn't originally optimized for that profile.

CC BY-SA 2.0 by Fernando de Sousa

Connection Counts

CloudFlare sees a large number of traditional HTTP requests that generate relatively short-lived connections. And, traditionally, we aimed at optimizing our network to support these requests. WebSockets present new challenges because they require much longer lived connections than traditional web requests, and that required changes to our network stack.

A modern operating system can handle multiple concurrent connections to different network services so long as there's a way to distinguish these connections from each other. One way of making these distinctions is called a "tuple". In theory, there are five distinct elements that form a tuple that can differentiate concurrent connections: protocol (e.g., TCP or UDP), the source IP address, the source port, the destination IP address, and the destination port.
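The five-tuple idea can be sketched in a few lines of Python (the addresses are illustrative documentation IPs):

```python
from collections import namedtuple

# A connection is identified by its five-tuple.
Conn = namedtuple("Conn", "proto src_ip src_port dst_ip dst_port")

a = Conn("tcp", "198.51.100.7", 50001, "203.0.113.1", 443)
b = Conn("tcp", "198.51.100.7", 50002, "203.0.113.1", 443)

# Differing in any single element (here: the source port) is enough
# for the operating system to treat them as distinct connections.
print(a != b)  # True
```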

Since CloudFlare is a proxy, there are two connections that matter: connections from browsers to our network, and connections from our network back to our customers' origin web servers. Connections from browsers to our network have highly diverse source IPs so they don’t impose a concurrent connection bottleneck. On the other hand, even before we implemented WebSockets, we've seen constraints based on concurrent connections to our customers' origin servers.

Trouble with Tuples

Connections are distinguished using the five tuple elements: two connections can be told apart if any of the five variables differ. However, in practice, the set is more limited. In the case of CloudFlare's connections to our customers' origins, the protocol for a connection, whether a WebSocket or HTTP, is always TCP. The destination port is also fixed: 80 if it's a non-encrypted connection, or 443 if it's an encrypted connection.

When CloudFlare first launched, all traffic to each origin server came from only one IP address per CloudFlare server. We found that caused problems with hosting providers' anti-abuse systems. They would see a very large number of requests from a single IP and block it.

Our solution to this problem was to spread the requests across multiple source IP addresses. We hash the IP address of the client in order to ensure the same browser will connect via the same IP address, since some older web applications use the connecting IP address as part of their session formula. Now we have at least 256 IPs for origin traffic in each data center. Each server will have a handful of these addresses to use for traffic to and from the origin.

While it seems like the number of possible connections would be nearly infinite given the five different variables in the tuple, practical realities limit the connection counts quickly. There's only one possibility for the protocol, destination port, and destination IP. For the source IP, we are limited by the number of IP addresses dedicated to connecting to the origin. That leaves the source port, which ends up being the biggest driver in the number of connections you can support per server.


Picking Ports

Port numbers are 16-bit values, which allows a theoretical maximum of 65,536 ports. In practice, though, the number of ports available to act as a source port is more limited.

The list of ports that can be used as a source port is known as the Ephemeral Port Range. The standards organization in charge of such things, known as IANA, recommends that the operating system pick a source port between 49152 and 65535. If you follow IANA's recommendations for the Ephemeral Port Range, there are only 16,384 available source ports.

The ports in the range 1 - 1023, known as "Well Known Ports", are specially reserved and excluded from the Ephemeral Port Range.

At CloudFlare, we have a good sense of what will be connecting across the IPs on our network so we're comfortable expanding our Ephemeral Port Range to span from 9024 through 65535, giving us 56,512 possible source ports. The maximum number of simultaneous outgoing connections to any given CloudFlare customers' origin from any given server on our network should be: 56,512 multiplied by the number of source IPs assigned to the server. You'd think that would be plenty of connections, but there's a catch.
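The arithmetic behind those figures is easy to verify (the per-server source IP count below is illustrative, not CloudFlare's real number):

```python
# Port ranges are inclusive at both ends.
iana_ephemeral = 65535 - 49152 + 1   # IANA default: 16,384 source ports
cf_ephemeral = 65535 - 9024 + 1      # expanded range: 56,512 source ports

# With a handful of origin-facing source IPs per server, the naive ceiling
# on simultaneous connections to one origin is ports x source IPs.
source_ips_per_server = 4            # illustrative
max_connections = cf_ephemeral * source_ips_per_server
```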

Bind Before Connect

As I wrote above, in order to prevent too much traffic from coming from a single IP, we spread requests across multiple source IP addresses. We use a version of Linux in the Debian family. In Linux, in order to pin the outbound request to a particular IP you bind a socket to a particular source IP and source port (using the bind() function) then establish the connection (using the connect() function). For example, if you wanted to set the source IP to be 1.1.1.1 and the source port to 1234 and then open a connection to the web server at www.google.com, you'd use the following code:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("1.1.1.1", 1234))
s.connect(("www.google.com", 80))

If you specify a source port of 0 when you call bind(), then you're instructing the operating system to randomly find an available port for you:

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Set source port to 0, instructing the OS to find an available port
s.bind(("1.1.1.1", 0))
s.connect(("www.google.com", 80))

That works great. However, Linux's bind function is conservative: because it doesn't know what you're going to be using the port for, when a port is reserved the bind function holds it regardless of the protocol, destination IP, or destination port. In other words, if you bind this way you're only using two of the possible five variables in the connection tuple.

At CloudFlare, binding this way limited the number of concurrent connections per server to 64k for every source IP globally, instead of the 64k for every source IP for every destination host that the full connection tuple allows.

In practice, with typical HTTP connections, the connection limits rarely had an impact. HTTP connections are typically very short-lived, so under normal circumstances no server would ever hit the limit. We would occasionally see the limits hit on some servers during large Layer 7 DDoS attacks. We knew, however, that if we were going to support WebSockets, a limited pool of concurrent connections would create a problem.


Safely Reusing Ports

Our solution was to instruct the operating system to be less conservative, and allow ports to be reused. You can do this when you set up a socket by setting the SO_REUSEADDR option. Here's the code:

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Specify that it's ok to reuse the same port even if it's been used before
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("1.1.1.1", 0))
s.connect(("www.google.com", 80))

This works fine in our case so long as two connections sharing the same source IP and source port are sending and receiving traffic from two different destination IPs. And, given the large number of destination IPs behind our network, conflicts are rare. If there's a conflict, the connect() function will return an error. The solution is to watch for the error and, when one occurs, retry the port selection until you find an available, unconflicted port. Here's a simplified version of the code we use:

import errno
import socket

RETRIES = 10  # how many times to retry port selection

for i in range(RETRIES):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("192.168.1.21", 0))
    try:
        s.connect(("www.google.com", 80))
        break
    except socket.error as e:
        if e.errno != errno.EADDRNOTAVAIL:
            raise
else:
    raise Exception("Failed to find an unused source port")

We’re now running this in production, increasing the maximum number of concurrent connections per server. With that improvement we're comfortable starting to support WebSockets to our customers.

WebSocket Rollout

We're beginning the rollout of WebSockets starting with our Enterprise customers. Enterprise customers who are interested in enabling WebSockets should contact their account manager. We're also rolling this feature out in beta to a limited number of our Business customers. If you're a Business customer and you'd like to sign up for the beta, please open a ticket with our support team.

Over the next few months, as we get a better sense for the demand and load this feature puts on our systems, we plan to expand support for WebSockets to more customers including those at other tiers of CloudFlare's service.

Finally, if you're interested in this topic and want to dive deeper into the technical details, I encourage you to read Marek Majkowski's blog post on the subject. Marek is the engineer on our team who spearheaded CloudFlare's efforts to add support for WebSockets. He argued convincingly that WebSockets were part of the modern web, and it was critical that we find a way to protect and accelerate them. Like our efforts to lead the way for broad SSL support, SPDY, and IPv6, CloudFlare's support of WebSockets furthers our mission of helping build a better Internet.

Google Now Factoring HTTPS Support Into Ranking; CloudFlare On Track to Make it Free and Easy


As of today, there are only about 2 million websites that support HTTPS. That's a shamefully low number. Two things are about to happen that we at CloudFlare are hopeful will begin to change that and make everyone love locks (at least on the web!).

CC BY 2.0 by Gregg Tavares

Google Ranks Crypto

First, Google just announced that they will begin taking into account whether a site supports HTTPS connections in their ranking algorithm. This means that if you care about SEO then ensuring your site supports HTTPS should be a top priority. Kudos to Google to giving webmasters a big incentive to add SSL to their sites.

SSL All Things

Second, at CloudFlare we've cleared one of the last major technical hurdle before making SSL available for every one of our customers -- even free customers. One of the challenges we had was ensuring we still had the flexibility to move traffic to sites dynamically between the servers that make up our network. While we can do this easily when traffic is over an HTTP connection, when a connection uses HTTPS we need to ensure that the correct certificates are in place and loaded into memory before requests are processed by a server.

To accomplish this, we needed to redesign how certificates are loaded into a server's memory. Previously, we'd load certificates into memory before traffic was directed to a server. That creates challenges when dealing with millions of domains and when shifting traffic to help isolate or mitigate an attack.

Lazy Loading Certs

Last week we pushed new code that allows us to "lazy load" SSL certificates on demand. This means that a certificate only needs to be in a data center, not on a particular server, before HTTPS traffic needing the certificate is directed to that server. When a request is received, the server can now dynamically retrieve the correct certificate even if it hasn't been previously loaded into memory. This allows us to continue to shift traffic to manage our network even if we are managing SSL certificates for millions of domains.
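In outline, lazy loading is just an in-memory cache that falls back to a slower, data-center-wide store on a miss. This sketch is hypothetical: the names and the `fetch_from_datacenter_store` helper are illustrative, not CloudFlare's actual code.

```python
# Hypothetical in-memory certificate cache, filled on demand.
_cert_cache = {}

def fetch_from_datacenter_store(hostname):
    # Stand-in for retrieving the certificate from shared storage
    # elsewhere in the data center.
    return "-----BEGIN CERTIFICATE----- %s -----END CERTIFICATE-----" % hostname

def get_certificate(hostname):
    """Return the certificate for a hostname, loading it on first use."""
    cert = _cert_cache.get(hostname)
    if cert is None:
        cert = fetch_from_datacenter_store(hostname)
        _cert_cache[hostname] = cert
    return cert
```

The payoff is that traffic for a domain can be shifted to any server in a data center without first pre-loading that domain's certificate onto it.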

We're on track to roll out SSL for all CloudFlare customers by mid-October. When we do, the number of sites that support HTTPS on the Internet will more than double. That they'll also rank a bit higher is pretty cool too.

In the meantime, if you want a quick way to boost your Google ranking, upgrading to any paid CloudFlare account will enable HTTPS by default. Even before we make it free, it's already the fastest, easiest way to get HTTPS support on any site.

Tinfoil Security vulnerability scanning now easy in CloudFlare Apps


We’re pleased to introduce a new CloudFlare App: Tinfoil Security. Tinfoil Security is a service designed to find possible web application vulnerabilities.

Security is central to CloudFlare's service. Our security features operate at the network level to identify and block malicious traffic from ever reaching your website or application. However, even with that protection in place, it’s still worth fixing problems at the application layer as well.

Tinfoil Security helps website owners learn about possible vulnerabilities in their applications by scanning for vulnerabilities, testing all access points, and providing step-by-step instructions on eliminating any threats found.

(Detail of an individual vulnerability report.)

Their developer-focused reports can be tied into the continuous integration lifecycle, with API hooks for kicking off new scans after changes are made.

Tinfoil offers several price points, including a free plan that checks for XSS (Cross-Site Scripting) concerns. The Tinfoil app is a quick and easy addition to your CloudFlare service. Take a look!


CloudFlare hiring Go programmers in London and San Francisco


Are you familiar with the Go programming language and looking for a job in San Francisco or London? Then think about applying to CloudFlare. We're looking for people with experience writing Go in both locations.

CC BY-SA 2.0 by Yuko Honda (cropped, resized)

CloudFlare uses Go extensively to build our service, and we need people to build and maintain those systems. We've written a complete DNS server in Go, our Railgun service is written entirely in Go, and we're moving more and more systems to Go programs.

We've recently written about our open source Red October Go project for securing secrets, and open-sourced our CFSSL Go-based PKI package. Go is now making its way into our data pipeline and is being used to process huge amounts of data.

We even have a Go-specific section on our GitHub.

If you're interested in working in Go on a high-performance global network like CloudFlare, send us an email.

Not into Go? We're hiring for all sorts of other positions and technologies.

DIY Web Server: Raspberry Pi + CloudFlare


The Raspberry Pi was created with a simple mission in mind: change the way people interact with computers. This inexpensive, credit card-sized machine is encouraging people, especially kids, to start playing with computers, not on them.

When the first computers came out, basic programming skills were necessary. This was the age of the Amigas, BBC Micros, the ZX Spectrum, and Commodore 64s. The generation that grew up with these machines gained a fundamental understanding of how computers work.

Computers today are easy to use and require zero understanding of programming to operate. They’re also expensive, and wrapped in sleek cases. While aesthetically pleasing designs and user friendly interfaces make computers appealing and accessible to everyone, these advances create a barrier to understanding how computers work and what they are capable of doing. This isn’t necessarily a problem, but for those who really understand computers, it seems that our collective sense of the power of computing has been dulled.

Raspberry Pi marks the beginning of a conscious effort to return to computing fundamentals. Starting at about $25—case not included—it’s purposely designed to remove barriers to tinkering, reprogramming, and, ultimately, to understanding how computers work. This return to fundamentals is rejuvenating the curiosity and creative spirit of computer hobbyists around the world.

The vibrant community of Raspberry Pi enthusiasts has found good use for an ARM processor, a GPU, a few ports, and an operating system (typically Linux-based) loaded onto an SD card. A quick Google search will return ideas for building your own Pi powered robots and helicopters, humidity and temperature sensors, voice activated coffee machines, and sophisticated home security systems—homemade cases encouraged.

Another slice of Pi?

Ok, maybe you don’t need a Lego wrapped, solar powered Raspberry Pi syncing your Christmas lights to techno music—most of us don’t. But there are a whole host of practical projects for Pi enthusiasts as well. One of our customers, Scott Schweitzer of Solarflare, turned his Pi into a web server.

It might seem crazy to host a website on an 8GB SD card since a small surge in traffic would knock the site offline. But with CloudFlare handling the load, speeding up the content, and keeping it secure, a Raspberry Pi is all Scott needed to host his sites. He started by testing one of his sites on his Pi-CloudFlare combo, and once he realized it worked, he called up his hosting provider to cancel his subscription. Now his Raspberry Pi sits in his closet using a trickle of energy, and CloudFlare provides the rest of the power.

Check out Scott’s project here.

Staying true to the Raspberry Pi mission, Scott provided resources for other members of the Pi community to use.

The Recipe:

Ingredients

  • 1 Raspberry Pi (the Model B + 512 MB is recommended)
  • 1 RJ45 Cat5 cable
  • 1 micro-USB power supply
  • 1 SD card

Directions:

  1. Follow these directions put together by Scott Schweitzer of Solarflare.

  2. Open port 80 on your home router and map it directly to your Raspberry Pi. Click here to learn more.

  3. Put CloudFlare in front of your Pi to protect it from vulnerabilities, and make your site fast and secure.

That's it!

This is the simple way of doing it, but Scott took a more roundabout path. He originally set up his Pi as a security penetration test tool using Raspberry Pwn Release 0.2, which is Debian 7 based. You can see that build here. Scott later added Apache2, but since he didn’t need PHP or MySQL, he left those off.

Note: If you prefer Nginx, see these instructions for installing it on your Pi.

By the way, if you like hacking on Nginx running on ARM CPUs, CloudFlare is hiring.

CloudFlare’s mission is to make the powerful tools used by internet giants accessible to everyone. Scott Schweitzer’s Raspberry Pi web server project is a perfect example.

Thanks for sharing your story Scott!

The Relative Cost of Bandwidth Around the World


CC BY 2.0 by Kendrick Erickson

Over the last few months, there’s been increased attention on networks and how they interconnect. CloudFlare runs a large network that interconnects with many others around the world. From our vantage point, we have incredible visibility into global network operations. Given our unique situation, we thought it might be useful to explain how networks operate, and the relative costs of Internet connectivity in different parts of the world.

A Connected Network

The Internet is a vast network made up of a collection of smaller networks. The networks that make up the Internet are connected in two main ways. Networks can connect with each other directly, in which case they are said to be “peered”, or they can connect via an intermediary network known as a “transit provider”.

At the core of the Internet are a handful of very large transit providers that all peer with one another. This group of approximately twelve companies are known as Tier 1 network providers. Whether directly or indirectly, every ISP (Internet Service Provider) around the world connects with one of these Tier 1 providers. And, since the Tier 1 providers are all interconnected themselves, from any point on the network you should be able to reach any other point. That's what makes the Internet the Internet: it’s a huge group of networks that are all interconnected.

Paying to Connect

To be a part of the Internet, CloudFlare buys bandwidth, known as transit, from a number of different providers. The rate we pay for this bandwidth varies from region to region around the world. In some cases we buy from a Tier 1 provider. In other cases, we buy from regional transit providers that either peer with the networks we need to reach directly (bypassing any Tier 1), or interconnect themselves with other transit providers.

CloudFlare buys transit wholesale and on the basis of the capacity we use in any given month. Unlike some cloud services like Amazon Web Services (AWS) or traditional CDNs that bill for individual bits delivered across a network (called "stock"), we pay for a maximum utilization for a period of time (called "flow"). Typically, we pay based on the maximum number of megabits per second we use during a month on any given provider.

Traffic levels across CloudFlare's global network over the last 3 months. Each color represents one of our 28 data centers.

Most transit agreements bill the 95th percentile of utilization in any given month. That means you throw out approximately 36 not-necessarily-contiguous hours worth of peak utilization when calculating usage for the month. Legend has it that in its early days, Google used to take advantage of these contracts by using very little bandwidth for most of the month and then ship its indexes between data centers, a very high bandwidth operation, during one 24-hour period. A clever, if undoubtedly short-lived, strategy to avoid high bandwidth bills.
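A sketch of how 95th-percentile billing works, assuming hourly utilization samples over a 30-day month (real meters sample more frequently, but the principle is the same):

```python
def billable_mbps(samples):
    """Sort the month's utilization samples, discard the top 5%,
    and bill the highest remaining sample."""
    ranked = sorted(samples)
    keep = int(len(ranked) * 0.95)   # number of samples that count
    return ranked[keep - 1]

# 719 quiet hours at 100 Mbps plus one 10 Gbps spike: with the top 5%
# (about 36 hours) thrown out, the spike doesn't affect the bill.
month = [100] * 719 + [10000]
```

Here `billable_mbps(month)` returns 100, which is exactly the loophole in the Google story: a huge transfer confined to a window shorter than the discarded 5% costs nothing.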

Another subtlety is that when you buy transit wholesale you typically only pay for traffic coming in (“ingress”) or traffic going out (“egress”) of your network, not both. Generally you pay whichever one is greater.

CloudFlare is a caching proxy so egress (out) typically exceeds ingress (in), usually by around 4-5x. Our bandwidth bill is therefore calculated on egress so we don't pay for ingress. This is part of the reason we don't charge extra when a site on our network comes under a DDoS attack. An attack increases our ingress but, unless the attack is very large, our ingress traffic will still not exceed egress, and therefore doesn’t increase our bandwidth bill.

Peering

While we pay for transit, peering directly with other providers is typically free — with some notable exceptions recently highlighted by Netflix. In CloudFlare's case, unlike Netflix, at this time, all our peering is currently "settlement free," meaning we don't pay for it. Therefore, the more we peer the less we pay for bandwidth. Peering also typically increases performance by cutting out intermediaries that may add latency. In general, peering is a good thing.

The chart above shows how CloudFlare has increased the number of networks we peer with over the last three months (both over IPv4 and IPv6). Currently, we peer around 45% of our total traffic globally (depending on the time of day), across nearly 3,000 different peering sessions. The chart below shows the split between peering and transit and how it's improved over the last three months as we’ve added more peers.

North America

We don't disclose exactly what we pay for transit, but I can give you a relative sense of regional differences. To start, let's assume as a benchmark in North America you'd pay a blended average across all the transit providers of $10/Mbps (megabit per second per month). In reality, we pay less than that, but it can serve as a benchmark, and keep the numbers round as we compare regions. If you assume that benchmark, for every 1,000Mbps (1Gbps) you'd pay $10,000/month (again, acknowledge that’s higher than reality, it’s just an illustrative benchmark and keeps the numbers round, bear with me).

While that benchmark establishes the transit price, the effective price for bandwidth in the region is the blended price of transit ($10/Mbps) and peering ($0/Mbps). Every byte delivered over peering is a would-be transit byte that doesn't need to be paid for. While North America has some of the lowest transit pricing in the world, it also has below average rates of peering. The chart below shows the split between peering and transit in the region. While it's gotten better over the last three months, North America still lags behind every other region in the world in terms of peering.

While we peer nearly 40% of traffic globally, we only peer around 20-25% in North America. Assuming the price of transit is the benchmark $10/Mbps in North America without peering, with peering it is effectively $8/Mbps. Based only on bandwidth costs, that makes it the second least expensive region in the world to provide an Internet service like CloudFlare. So what's the least expensive?

Europe

Europe's transit pricing roughly mirrors North America's so, again, assume a benchmark of $10/Mbps. While transit is priced similarly to North America, in Europe there is a significantly higher rate of peering. CloudFlare peers 50-55% of traffic in the region, making the effective bandwidth price $5/Mbps. Because of the high rate of peering and the low transit costs, Europe is the least expensive region in the world for bandwidth.

The higher rate of peering is due in part to the organization of the region's “peering exchanges”. A peering exchange is a service where networks can pay a fee to join, and then easily exchange traffic between each other without having to run individual cables between each others' routers. Networks connect to a peering exchange, run a single cable, and then can connect to many other networks. Since using a port on a router has a cost (routers cost money, have a finite number of ports, and a port used for one network cannot be used for another), and since data centers typically charge a monthly fee for running a cable between two different customers (known as a "cross connect"), connecting to one service, using one port and one cable, and then being able to connect to many networks can be very cost effective.

The value of an exchange depends on the number of networks that are a part of it. The Amsterdam Internet Exchange (AMS-IX), Frankfurt Internet Exchange (DE-CIX), and the London Internet Exchange (LINX) are three of the largest exchanges in the world. (Note: these links point to PeeringDB.com which provides information on peering between networks. You'll need to use the username/password guest/guest in order to login.)

In Europe, and most other regions outside North America, these and other exchanges are generally run as non-profit collectives set up to benefit their member networks. In North America, while there are Internet exchanges, they are typically run by for-profit companies. The largest of these for-profit exchanges in North America are run by Equinix, a data center company, which uses exchanges in its facilities to increase the value of locating equipment there. Since they are run with a profit motive, pricing to join North American exchanges is typically higher than exchanges in the rest of the world.

CloudFlare is a member of many of Equinix's exchanges, but, overall, fewer networks connect with Equinix compared with Europe's exchanges (compare, for instance, Equinix Ashburn, which is their most popular exchange with about 400 networks connected, versus 1,200 networks connected to AMS-IX). In North America the combination of relatively cheap transit and relatively expensive exchanges lowers the value of joining an exchange. With fewer networks joining exchanges, there are fewer opportunities for networks to easily peer. The corollary is that in Europe transit is also cheap but peering is very easy, making the effective price of bandwidth in the region the lowest in the world.

Asia

Asia’s peering rates are similar to Europe. Like in Europe, CloudFlare peers 50-55% of traffic in Asia. However, transit pricing is significantly more expensive. Compared with the benchmark of $10/Mbps in North America and Europe, Asia's transit pricing is approximately 7x as expensive ($70/Mbps, based on the benchmark). When peering is taken into account, however, the effective price of bandwidth in the region is $32/Mbps.

There are three primary reasons transit is so much more expensive in Asia. First, there is less competition, and a greater number of large monopoly providers. Second, the market for Internet services is less mature. And finally, if you look at a map of Asia you’ll see a lot of one thing: water. Running undersea cabling is more expensive than running fiber optic cable across land so transit pricing offsets the cost of the infrastructure to move bytes.

Latin America

Latin America is CloudFlare's newest region. When we opened our first data center in Valparaíso, Chile, we delivered 100 percent of our traffic over transit, which you can see from the graph above. To peer traffic in Latin America you need to either be in a "carrier neutral" data center — which means multiple network operators come together in a single building where they can directly plug into each other's routers — or you need to be able to reach an Internet exchange. Both are in short supply in much of Latin America.

The country with the most robust peering ecosystem is Brazil, which also happens to be the largest country and largest source of traffic in the region. You can see that as we brought our São Paulo, Brazil data center online about two months ago we increased our peering in the region significantly. We've also worked out special arrangements with ISPs in Latin America to set up facilities directly in their data centers and peer with their networks, which is what we did in Medellín, Colombia.

While today our peering ratio in Latin America is the best of anywhere in the world at approximately 60 percent, the region's transit pricing is 8x ($80/Mbps) the benchmark of North America and Europe. That means the effective bandwidth pricing in the region is $32/Mbps, or approximately the same as Asia.

Australia

Australia is the most expensive region in which we operate, but for an interesting reason. We peer with virtually every ISP in the region except one: Telstra. Telstra, which controls approximately 50% of the market, and was traditionally the monopoly telecom provider, charges some of the highest transit pricing in the world — 20x the benchmark ($200/Mbps). Given that we are able to peer approximately half of our traffic, the effective bandwidth benchmark price is $100/Mbps.

To give you some sense of how out-of-whack Australia is, at CloudFlare we pay about as much every month for bandwidth to serve all of Europe as we do for Australia. That’s in spite of the fact that approximately 33x the number of people live in Europe (750 million) versus Australia (22 million).

If Australians wonder why Internet and many other services are more expensive in their country than anywhere else in the world they need only look to Telstra. What's interesting is that Telstra maintains their high pricing even if only delivering traffic inside the country. Given that Australia is one large land mass with relatively concentrated population centers, it's difficult to justify the pricing based on anything other than Telstra's market power. In regions like North America where there is increasing consolidation of networks, Australia's experience with Telstra provides a cautionary tale.

Conclusion

The chart above shows the relative cost of bandwidth assuming a benchmark transit cost of $10/Megabits per second (Mbps) per month (which we know is higher than actual pricing, it’s just a benchmark) in North America and Europe.
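The effective prices quoted throughout this post all come from one blend: the transit price weighted by the share of traffic that isn't peered. Using the benchmark prices and approximate peering fractions from the text:

```python
def effective_price(transit_per_mbps, peered_fraction):
    # Peered bytes cost nothing, so only the non-peered share pays transit.
    return transit_per_mbps * (1 - peered_fraction)

regions = {
    "North America": (10, 0.20),    # -> $8/Mbps
    "Europe":        (10, 0.50),    # -> $5/Mbps
    "Asia":          (70, 0.55),    # -> ~$32/Mbps
    "Latin America": (80, 0.60),    # -> $32/Mbps
    "Australia":     (200, 0.50),   # -> $100/Mbps
}
```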

While we keep our pricing at CloudFlare straightforward, charging a flat rate regardless of where traffic is delivered around the world, actual bandwidth prices vary dramatically between regions. We’ll continue to work to decrease our transit pricing and increase our peering in order to offer the best possible service at the lowest possible price. In the meantime, if you’re an ISP who wants to offer better connectivity to the increasing portion of the Internet behind CloudFlare’s network, we have an open policy and are always happy to peer.

Go interfaces make test stubbing easy


Go's approach to "object-orientation" is through interfaces. Interfaces provide a way of specifying the behavior expected of an object: rather than declaring what a concrete type can do, they specify what's expected of any type used in that role. If any object meets the interface specification, it can be used anywhere that interface is expected.

I was working on a new, small piece of software that does image compression for CloudFlare and found a nice use for interfaces when stubbing out a complex piece of code in the unit test suite. Central to this code is a collection of goroutines that run jobs. Jobs are provided from a priority queue and performed in priority order.

The jobs ask for images to be compressed in myriad ways and the actual package that does the work contained complex code for compressing JPEGs, GIFs and PNGs. It had its own unit tests that checked that the compression worked as expected.

But I wanted a way to test the part of the code that runs the jobs (which, itself, doesn't actually know what the jobs do). Because I only wanted to test whether the jobs were run correctly (and not the compression), I didn't want to have to create (and configure) the complex job type that gets used when the code really runs.

What I wanted was a DummyJob.

The Worker package actually runs jobs in a goroutine like this:

func (w *Worker) do(id int, ready chan int) {
    for {
        ready <- id

        j, ok := <-w.In
        if !ok {
            return
        }

        if err := j.Do(); err != nil {
            logger.Printf("Error performing job %v: %s", j, err)
        }
    }
}

do gets started as a goroutine passed a unique ID (the id parameter) and a channel called ready. Whenever do is able to perform work it sends a message containing its id down ready and then waits for a job on the worker w.In channel. Many such workers run concurrently and a separate goroutine pulls the IDs of workers that are ready for work from the ready channel and sends them work.

If you look at do above you'll see that the job (stored in j) is only required to offer a single method:

func (j *CompressionJob) Do() error

The worker's do just calls the job's Do function and checks for an error return. But the code originally had w.In defined like this:

w := &Worker{In: make(chan *job.CompressionJob)}

which would have required that the test suite for Worker know how to create a CompressionJob and make it runnable. Instead I defined a new interface like this:

type Job interface {
    Priority() int
    Do() error
}

The Priority method is used by the queueing mechanism to figure out the order in which jobs should be run. Then all I needed to do was change the creation of the Worker to

w := &Worker{In: make(chan job.Job)}

The w.In channel is no longer a channel of CompressionJobs, but of interfaces of type Job. This shows a really powerful aspect of Go: anything that meets the Job interface can be sent down that channel and only a tiny amount of code had to be changed to use an interface instead of the more 'concrete' type CompressionJob.

Then in the unit test suite for Worker I was able to create a DummyJob like this:

var Done bool

type DummyJob struct {
}

func (j DummyJob) Priority() int {
    return 1
}

func (j DummyJob) Do() error {
   Done = true
   return nil
}

It sets a Done flag when the Worker's do function actually runs the DummyJob. Since DummyJob meets the Job interface it can be sent down the w.In channel to a Worker for processing.

Creating that Job interface completely isolated what the Worker needs in order to run jobs and hid all the other details, greatly simplifying the unit test suite. Most interesting of all, no changes at all were needed to CompressionJob to achieve this.

SXSW Interactive 2015: Vote for CloudFlare’s Submissions


Has your Twitter feed been flooded with “vote for my SXSW panel” tweets? With so much buzz all over the place, we wanted to keep it simple and share all of the presentations and panels affiliated with CloudFlare, in one place. Check out CloudFlare's presentations and panels below. If our topics interest you, casting a vote will take just a few minutes!

How to vote:

  1. To sign up go to this link
  2. Enter your name & email address, then confirm your account
  3. Log in with your new account and go to the “PanelPicker”
  4. Click “search/vote” and search for your panel by title
  5. VOTE

Please note: Voting ends on September 6th!

PanelPicker voting counts for 30% of a session's acceptance to SXSW. Our panels cover a variety of topics, from a tell-all that reveals the real story behind the male/female co-founder dynamic to exploring ways to protect human rights online. There's something for everyone, so check them out and vote for your favorite! Every vote counts!

Help CloudFlare get to SXSW!

Presentations:

“Lean On” is the New “Lean In”
Matthew Prince, co-founder and CEO of CloudFlare, will sit down with Michelle Zatlyn, co-founder and Head of User Experience at CloudFlare, for a tell-all about founding a startup as a male/female team. A cross between “The Dating Game” and “21 Questions,” this humorous--yet raw--joint interview will cover how co-founders of opposite sexes struggle and succeed.

Speakers:

  • Michelle Zatlyn, CloudFlare
  • Matthew Prince, CloudFlare

Fighting Surveillance can be Good for Business
Matthew Prince, co-founder and CEO of CloudFlare and Chris Soghoian, Principal Technologist and a Senior Policy Analyst at American Civil Liberties Union (ACLU) will discuss how US tech companies have realized that privacy can be a competitive advantage. They will focus on how building secure products designed to resist surveillance and government coercion can be good for business.

Speakers:

  • Christopher Soghoian, American Civil Liberties Union (ACLU)
  • Matthew Prince, CloudFlare

How the Internet Works: A Primer
Join world-renowned British programmer and writer John Graham-Cumming for a technical primer on the journey through networks and packets of information. Learn about the Internet’s early years and how it has evolved today, including the bottlenecks and hiccups nobody predicted, and predictions for its future.

Speaker:

  • John Graham-Cumming, CloudFlare

Panels:

We Take It for Granted: Defending All Human Rights
When we log onto the Internet most of us take for granted the right we have to write and say whatever we want. The Internet was built for all of us, not just the powerful. Attendees of this session will hear from NGO’s, civil society groups, technology companies and victims of online censorship about the biggest threats to freedom of speech on the Internet today. They’ll hear what companies and individuals are doing to ensure a free and open Internet does not become a memory.

Speakers:

  • Kenneth Carter, CloudFlare
  • Christopher Soghoian, ACLU
  • Ebele Okobi, Yahoo!
  • Ronald Deibert, The Citizen Lab

A Must for Startups: 10 Tips About Law Enforcement
This panel serves as a practical guide for any startup founder or core team to become familiar with their individual law enforcement strategy. Attendees will hear from experts on every side of this issue to learn how to push back when it’s appropriate, how to preserve a company’s rights, and how to constructively work with law enforcement agencies.

Speakers:

  • Jamie Tomasello, CloudFlare
  • Nick Grossman, Union Square Ventures
  • Nicole Nearing, Kik Interactive, Inc.
  • Nate Cardozo, Electronic Frontier Foundation

Anyone Can Prevent Cyberwar-- Here's How
Governments, cyber militias, and others increasingly seek to exploit the digital security vulnerabilities of their adversaries to cause physical harm including imprisonment, disappearances, and even murder. Learn more from security researchers and victims alike about how these crimes have unfolded and what you can do to prevent them.

Speakers:

  • Ryan Lackey, CloudFlare
  • Bill Marczak, Bahrain Watch
  • Eva Galperin, Electronic Frontier Foundation
  • Runa Sandvik, Freedom of the Press Foundation

David Partners with Goliath: BD Hacks for Startups
This group of business development gurus will discuss tactics that will propel your company forward and attract the right partner. Learn from panelists who have been building partnerships for their entire careers--assembling some of the largest partnerships in the tech industry. They’ll cover the good, the bad, and the ugly of architecting large-scale partnerships.

Speakers:

  • Maria Karaivanova, CloudFlare
  • Bob Rosin, LinkedIn
  • Niall Wall, Box
  • Bernie Brenner, www.truecar.com

Don’t Stand So Close to Me: Engineering & Sales
This panel will focus on growing sales without killing your engineering culture. Sales leaders from the world’s most engineering-focused companies will discuss lessons learned from making both cultures strong--apart. Learn how to work better together and what fundamental shifts are needed to maximize your company’s potential in the marketplace.

Speakers:

  • Chris Merritt, CloudFlare
  • Scott Doughman, Yahoo!
  • Armando Mann, RelateIQ
  • Mitch Spolan, Chegg

Participate in the “Internet Slowdown” with One Click


Net Neutrality is an important issue for CloudFlare as well as for our more than 2 million customers, whose success depends on a vibrant, dynamic, and open Internet. An open Internet promotes innovation, removes barriers to entry, and provides a platform for free expression.

That's why we’re announcing a new app that lets you easily participate in the “Internet Slowdown” on September 10th, 2014.

Battleforthenet.com (a project of Demand Progress, Engine Advocacy, Fight for the Future, and Free Press) has organized a day of protest against the United States Federal Communications Commission (FCC) proposal that will allow Internet providers to charge companies additional fees to provide access to those companies’ content online. Those additional fees will allow Internet service providers to essentially choose which parts of the Internet you will get to access normally, and which parts may be slow or inaccessible.

We have seen that bandwidth pricing does not reflect underlying fair market value when Internet service providers have monopolistic control, and we can only worry that a lack of net neutrality will produce a similar situation.

The Battle for the Net pop-up (intentionally obtrusive) will simulate a loading screen that website users may see if the website is put into an Internet “slow lane.”

Battle for the Net pop-up

When web visitors see this pop-up, they will be prompted to contact Congress, the FCC, and the White House.

Organizations such as the Electronic Frontier Foundation (EFF), the American Civil Liberties Union (ACLU), and others have joined this effort to express their concern regarding the impact that losing net neutrality can have on the freedom and openness of the Internet. For more information, please visit battleforthenet.com.

Installing the Battle for the Net App

You can add the app to your site, starting today. It is free. The banner will automatically appear on September 10, 2014 and only on that day. As with any CloudFlare App, installation is just one click — no coding necessary.

Above is a screenshot of what the modal looks like on an example website. The modal appears in front of the center of the site and can be permanently dismissed by a visitor by clicking the (X) in the upper right hand corner. The app makes the banner automatically appear on September 10, 2014. After September 10th, the app will be automatically removed by CloudFlare.

Protection against critical Windows vulnerability (CVE-2015-1635)


8.1 Crash

A few hours ago, more details surfaced about the MS15-034 vulnerability. Simple PoC code has been widely published that will hang a Windows web server if sent a request with an HTTP Range header containing large byte offsets.

We have rolled out a WAF rule that blocks these requests.

Customers on a paid plan who have the WAF enabled are automatically protected against this problem. It is highly recommended that you upgrade IIS and your Windows servers as soon as possible; in the meantime, any requests coming into CloudFlare that try to exploit this DoS/RCE will be blocked.
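The exact WAF rule isn't published, but its shape is easy to sketch. Below is a minimal, hypothetical Python check (the function name, the regex, and the 2**32 threshold are assumptions, not CloudFlare's actual rule) that flags Range headers carrying implausibly large byte offsets like the public PoC does:

```python
import re

# Threshold is an assumption: legitimate clients don't request offsets
# anywhere near 2**32, while the MS15-034 PoC uses 18446744073709551615.
SUSPICIOUS_OFFSET = 2 ** 32

# Each range spec in a Range header looks like "start-end",
# where either side may be empty.
RANGE_SPEC = re.compile(r"(\d*)-(\d*)")

def is_suspicious_range(header_value):
    """Return True if an HTTP Range header contains a huge byte offset."""
    if not header_value.startswith("bytes="):
        return False
    for start, end in RANGE_SPEC.findall(header_value[len("bytes="):]):
        for offset in (start, end):
            if offset and int(offset) >= SUSPICIOUS_OFFSET:
                return True
    return False

print(is_suspicious_range("bytes=0-18446744073709551615"))  # True
print(is_suspicious_range("bytes=0-1023"))                  # False
```

A request matching the check would be blocked at the edge; everything else passes through as normal. This is only a sketch; a production rule also has to handle malformed headers and multipart range sets.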


Oceania Redundancy: Auckland and Melbourne data centers now online


The genesis of our 33rd and 34th data centers in Auckland and Melbourne started a short hop away in nearby Sydney. Prior to these deployments, traffic from all of New Zealand and Australia's collective 23 million Internet users was routed through CloudFlare's Sydney data center. Even for those in faraway Perth, the time necessary to reach our Sydney PoP was a mere 55ms of round trip time (RTT). By comparison, the blink of an eye takes 300-400ms. In other words, latency wasn't exactly the pressing concern. The real concern was a failure scenario in our Sydney data center.

Fortunately, our entire architecture starts with an assumption: failure is going to happen. As a result, we plan for failure at every level and have designed a system to gracefully handle it when it occurs. Even though we now maintain multiple layers of redundancy—from power supplies and power circuits to line cards, routing engines and network providers—our ultimate level of redundancy is in the ability to fail out an entire data center in favor of another. In the past we've even written about how this might even play out in the case of a global thermonuclear war. In this instance, the challenge we set out to solve was not how to fail gracefully, but how to fail gracefully without materially increasing latency for the millions of applications that depend on our network in the Oceania region.

Grace and speed

Prior to our Auckland and Melbourne data centers, a failure in Sydney meant a shift in traffic to the West Coast of the US or Southeast Asia, adding significant, and noticeable, latency to our users' applications (spoiler: it now fails over to Auckland and Melbourne with minimal latency!). But before we get to how the Kiwis and Australia's "second city" saved the day, it is important to understand how the Internet "works" in Oceania. As we set out to create resiliency in-region, we considered several options:

Plan A: Second (redundant) data center in Sydney

At first blush a second facility in Sydney would seem to solve most imaginable failure scenarios (perhaps save a nuclear one). However, when it comes to the Internet, things are rarely intuitive. Australia, at least in the context of the Internet, is very Sydney-centric. The vast majority of traffic from Australia to the rest of the Internet passes through a single data center (which just so happens to be the same exact facility in which we are currently located). Even if we were to make a redundant deployment in a completely separate facility, traffic to that facility would still have to pass through the same potential point of failure: our current facility. Not to mention, a second facility in Sydney would not reduce latency or improve performance for a larger subset of Internet users in the region, nor would it localize our traffic any further than it already was. It also wouldn't have opened up any new peering opportunities, which, as we've explained in a prior blog post, are of immense importance to the performance and overall health of our network.

Not enough redundancy. No performance gain from status quo.

Plan B: Add a data center in Auckland

Out of left field came Auckland. Although not an obvious choice, Auckland is rather uniquely situated to provide redundancy in-region as a result of how many operators have constructed their networks: by building or buying a three-drop ring between New Zealand-Australia, Australia-USA, and USA-New Zealand.

Because capacity is heavily utilized in only one direction, towards New Zealand, a lot of capacity is left free in the other direction, from New Zealand to Australia. After working with various providers, we've structured a solution that allows us to reduce latency and further localize traffic for Internet users in New Zealand, while also allowing for full redundancy between Auckland, Sydney, and the rest of Oceania. Not to mention, CloudFlare is now a member of New Zealand's largest peering exchange, APE-IX.

Redundancy and performance gains versus the status quo.

But why stop there?

Plan C: Add a data center in Auckland AND Melbourne

Despite achieving the desired level of redundancy and performance gains in New Zealand through our own version of the Trans-Tasman arrangement, we figured that both Kiwis and Aussies would prefer not to have the other's redundancy deposited at their doorstep. So, along came Melbourne as a complement to Auckland. Our Melbourne data center offers the same benefits of content localization and performance improvement for Internet users south of the border, as well as domestic redundancy in the case of a data center failure.

Latency improvement and additional redundancy.

Problem solved, right? Almost...

The Auckland situation

The Auckland fiber situation is an interesting one. Auckland is situated around a harbour. Over this harbour is a bridge which most of the fiber in the city runs across, with a small amount running via a much longer path around the harbour (think 30km longer fiber runs). Purchasing fiber between the areas of the city separated by the harbour costs more than a Kim Dotcom political party (i.e. a lot of money).

The bulk of the country's Internet providers (particularly the smaller ones) exist only south of the harbour bridge. The cable landing stations and many of the data centers, on the other hand, only exist north of the harbour bridge. If you are as performance obsessed as we are, you want to be south of the bridge so that you can peer with all networks in as inexpensive, resilient and easy manner as possible. But for us, the vetting process didn't stop there. The specific site we selected is the core node for most major New Zealand transit providers, allows us to interconnect with nearly every provider from within the same facility, and hosts a core node of the local peering exchange.

Now that our Auckland DC is live, some users in New Zealand may notice that their ISPs continue to route to CloudFlare in Sydney. That makes no sense you say!? We agree! Despite our best efforts, it takes two to tango. Should this be the case with your ISP, let them know...hopefully that will spark a conversation.

Photo sources: Robin Ducker and Robert Michalski; images used under a Creative Commons license.

Contributing back to the security community


This Friday at the RSA Conference in San Francisco, along with Marc Rogers, Principal Security Researcher at CloudFlare, I'm speaking about a version of The Grugq's PORTAL, an open source network security device designed to make life easier and safer for anyone traveling, especially internationally, with phones, tablets, laptops, and other network-connected devices.

Portal uses open-source software and services to take inexpensive, commodity travel routers and turn them into powerful security devices. Since this is pretty far from CloudFlare's core business, it warrants a brief digression into why we support projects like this.

Computer security was for a very long time only of interest to hobbyists, academics, and obscure government agencies. Cryptography was an interesting offshoot of number theory, a foundational but very abstract part of mathematics, and many of the early infrastructure components of the Internet didn't include security at all -- there was an assumption that anyone who could gain access would be responsible and well-intentioned, a consequence of the Internet's academic origins; after all, why would anyone want to break or steal things which were freely available?

Before the "cambrian explosion" of commercial computer security, there was still a lot of great security research -- it was just done by academics and by individuals in the "security community", who were motivated by a desire to understand how things worked, and to make tools because they loved the technology and wanted to solve their own problems. Some of the most interesting and powerful security tools available today trace their origins to rather humble open-source, hobbyist, or academic beginnings -- PGP, Tor, OTR, various forms of electronic cash, and many others. Many of today's most respected people in computer security entered the field during this period, out of personal curiosity or academic interest.

While CloudFlare is an eager participant in the commercial security world (we're the easiest and fastest way to set up TLS for any website, and we provide edge security and performance to millions of sites, including some of the largest sites on the Internet -- both with free service and paid services in various tiers), we are also very aware of the broad and deep foundation of security tools and research on which we're built.

CloudFlare makes extensive use of open source software, such as the Nginx web server, community collections of Web Application Firewall (WAF) rules originally generated by OWASP, and powerful cryptographic algorithms developed in academia and implemented by open source efforts such as the OpenSSL Project.

Where possible, CloudFlare also contributes back to the community in those areas. We contribute bugfixes and new functionality back to open source packages, and we employ developers who in their spare time make additional contributions to open source software. CloudFlare's GitHub Open Source page is a great collection of many of our contributions to open source.

One of our biggest contributions to date has been CFSSL, CloudFlare's PKI toolkit. We're constantly hearing from various projects and companies how CFSSL has been helpful to them -- one of the most exciting being the Let's Encrypt community Certificate Authority project. Nick Sullivan has written in the CloudFlare blog announcing CFSSL, and exciting things are continuing to happen with that software.

CloudFlare, like many other companies in computer security, makes other contributions to the security community. One of the most interesting is that we, like some other companies, value having employees participate in the security community in a variety of ways. Encouraging side projects independent of work -- research, finding new vulnerabilities and responsibly disclosing them, creating new tools, participating in conferences or working groups, running tutorials, and being active in standards bodies -- sometimes doesn't have a direct connection to the company's products, but it contributes to a vibrant security ecosystem. There are often unforeseen benefits of these collaborations -- learning about new tools, finding great engineers (we're actively hiring for a variety of roles), and many others.

Marc and I are grateful to CloudFlare for the time to work on this open source tool and to present it to the world, and we're looking forward to presenting at RSA.

Of Phishing Attacks and WordPress 0days


Proxying around 5% of the Internet’s requests gives us an interesting vantage point from which to observe malicious behavior. However, it also makes us a target. Aside from the many and varied denial of service (DDoS) attacks that break against our defenses, we also see a huge number of phishing campaigns. In this blog post I'll dissect a recent phishing attack that we detected and neutralized with the help of our friends at Bluehost.

This attack is particularly interesting because it appears to be using a brand new WordPress 0day.

A Day Out Phishing

The first sign we typically see that a new phishing campaign is underway is the phishing emails themselves. Generally, there's a constant background noise from a few of these emails targeting individual customers every day. However, when a larger campaign starts up, that trickle typically turns into a flood of similar messages.

Here's an example we've recently received:

Example Phish

Note — CloudFlare will never send you an email like this. If you see one like it, it is fake and should be reported to our abuse team by forwarding it to support@cloudflare.com.

In terms of the phishing campaign timeline, these emails aren’t the first event. Much like a spider looking to trap flies, a phisher first has to build a web to trap his or her victims. One way is through landing pages.

Looking like the legitimate login page of a target domain, these landing pages have one goal - to collect your credentials. Since these landing pages are quickly identified, the phisher will often go to great lengths to ensure that he or she can put up tens or even hundreds of pages during the lifetime of a campaign, all while being extra careful that these pages can't be traced back to him or her. Generally, this means compromising a large number of vulnerable websites in order to inject a phishing toolkit.

It's no surprise, then, that the first step in most phishing campaigns is usually the mass compromise of a large number of vulnerable websites. This is why you will often see a notable uptick in the volume of phishing emails whenever a major vulnerability comes out for one of the popular CMS platforms. This is also why protecting the Internet’s back-office is a critical step in building a better Internet. If vulnerable CMS sites are protected, not only can they flourish, but the thousands of potential victims that could get abused when their infrastructure gets hijacked for malicious purposes are also protected.

This is why, at CloudFlare, we feel that providing free, basic security to every website is such an important thing and why, ultimately, it could be such a game changer in building a better Internet.

Back to the phish

Returning to our phishing attack, it's no different. Analyzing the “load.cloudflare.com” hyperlink in the message, we see that it actually points to a compromised WordPress site hosted by Bluehost.

Note: This is not a reflection on Bluehost; every hosting provider gets targeted at some point. What's more important is how those hosting providers subsequently respond to reports of compromised sites. In fact, Bluehost should be commended for the speed with which they responded to our requests and the way they handled the affected sites we reported.

Every other email in this particular campaign followed the same pattern. Here is the source for another one of those links that uses “activate.cloudflare.com”:

As you can see, while the message will display that you are going to “activate.cloudflare.com”, in reality, anyone that clicks on the link will be diverted to the victim website. Which, unsurprisingly, is running an old, vulnerable version of WordPress.

Every phishing email from this campaign has followed exactly the same pattern: a basic email template addressed to $customer informing them that their site has been locked, and inviting them to click on a link that takes them to a compromised WordPress site on Bluehost.

It looks like this attacker harvested a large number of target domains using public DNS and email records identifying administrative email addresses. This became the victim list. The attacker then targeted a convenient, vulnerable CMS platform and injected his or her phishing kit into every innocent domain that was compromised. Finally, once that was complete, the attacker sent out the phishing emails to the victim list.

As phishing attacks go, this one is remarkably unsophisticated. All a savvy user had to do to reveal the true nature of this link was a quick mouse-over. As soon as you mouse over, the link you see -- “activate.cloudflare.com” -- does not match the true destination.

More advanced phishing techniques

A clever phisher could have used one of the many well known tricks to obfuscate the URL. Below are some of those techniques, so you will recognize them if you see them.

  • Image Maps. Instead of using a traditional hyperlink as above, phishers have been known to put an image map in their emails. The image, of course, is of a link to a trusted site such as “www.safesite.com”. When an unsuspecting user clicks within the coordinates of the image map, they are diverted to the phishing site.

Here's an example of this technique taken from an old eBay phishing email:

In order to fool Bayesian filters looking for phishing spam like this, the phisher also added some legitimate-sounding words in white font so they wouldn't be visible to the reader. The user experience, however, is the same as with the earlier phishing email. As soon as you mouse over the image map, you will see the true destination.

  • Misspelled domain names and homoglyphs. Misspelled domains can look very similar to their legitimate counterparts, and by using a homoglyph -- or look-alike character -- an attacker can make a misspelling look even less obvious. Examples include “microsft.com” or “g00gle.com”. These domains look so similar to the advertised link in the phishing email that many people will miss the discrepancy when they mouse over the link.

  • Reflection, Redirection, and javascript. Many websites -- even sites like answers.usa.gov -- have search features, offsite links, or vulnerable pages that have historically been abused by phishers. If the offsite link can be manipulated, typically with a cross-site scripting vulnerability, it's possible for the phisher to present a link from the target domain that takes the victim to a page under the phisher's control. Below is an example of a historic flaw of this nature that existed on the answers.usa.gov site:

    In this case, the URL looks like a legitimate “answers.usa.gov” URL, but if you clicked on it, you would activate a cross-site scripting flaw that executes the javascript in your browser. The attacker could easily turn a page with this sort of flaw into a malicious credential harvester, all while continuing to use a link to the legitimate site.

    Note - All those extra %20’s are encoded spaces to push the javascript far enough away that it won’t be visible on mouse over.

    A slightly different flaw, also on the USA.gov site, involved its URL shortening service. Because the service was open to anyone, phishers quickly discovered that they could use it to create shortened URLs that looked important because of the .gov prefix. A victim that might be reluctant to click on an unsolicited bit.ly link might be less reluctant if faced with a .gov link. Here's an example of an email from a campaign abusing that service:

  • URL obfuscation. Historically, this has been one of the most popular and varied techniques. The concept is simple: use any of the available URL encoding methods to disguise the true nature of the destination URL. I'll describe a couple of historic techniques below.

    Note: many modern browsers now warn against some of these techniques.

    First is username:password@url abuse. This notation, now deprecated because only an idiot would pass credentials in a URL these days, was designed to allow seamless access to password-protected areas. Abuse is easy, for example:

    www.safesite.com@www.evilsite.com

    Next is IP address obfuscation. You are probably familiar with the IP address as a dotted quad: 123.123.123.123. Well, IP addresses can also be expressed in a number of other formats which browsers will accept. By combining this with the “username:password@” trick above, an attacker can effectively hide his true destination. Below are four different methods for presenting one of Google’s IP addresses -- 74.125.131.105

    All of these URLs go to 74.125.131.105.

    Finally we have Punycode and Homoglyph based obfuscation. Punycode was created as a way for international characters to map to valid characters for DNS, e.g., “café.com”. Using punycode this would be represented as xn--caf-dma.com. As mentioned at the start, homoglyphs are symbols which closely resemble other symbols, like 0 and O, or I and l.

    By combining these two methods we can create URLs like:

    www.safesite.com⁄login.html.evilsite.com

    The secret to this obfuscated URL is to use a non-standard character which happens to be a homoglyph for /. The result? Instead of a page on safesite.com, you are actually taken to a subdomain of the following punycode domain:

    www.safesite.xn--comlogin-g03d.html.evilsite.com

    New obfuscation techniques like these appear all the time. Phishing is both the most common and arguably the most effective method of attack for medium- to low-skill attackers. Staying up to date with these techniques can be extremely useful when it comes to spotting potential phishing attempts.
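Most of these tricks unravel as soon as the URL is parsed the way a browser parses it. The short Python sketch below (reusing the made-up safesite/evilsite names and the café.com example from above) shows the userinfo trick, the "dword" form of an IP address, and punycode encoding:

```python
from urllib.parse import urlsplit

# 1. username@host abuse: everything before "@" is userinfo, not the host.
parts = urlsplit("http://www.safesite.com@www.evilsite.com/login")
print(parts.hostname)  # www.evilsite.com
print(parts.username)  # www.safesite.com

# 2. IP obfuscation: a dotted quad can also be written as one 32-bit
#    integer ("dword" notation), which browsers accept.
dword = 0
for octet in "74.125.131.105".split("."):
    dword = dword * 256 + int(octet)
print(dword)  # 1249739625, so http://1249739625/ is the same address

# 3. Punycode: internationalized labels become ASCII xn-- labels, which
#    is what makes homoglyph domains registrable.
print("café.com".encode("idna"))  # b'xn--caf-dma.com'
```

Mousing over a link only helps if you read carefully; parsing the URL (or pasting it somewhere visible) exposes where the request will really go.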

Conclusions

After further analysis, it quickly became clear that all of the endpoints in this campaign were compromised WordPress sites running WordPress 4.0 - 4.1.1.

The most likely scenario is that a new critical vulnerability has surfaced in WordPress 4.1.1 and earlier. Given that 4.1.1 was, at the time of writing, the most current version of WordPress, this can only mean one thing -- a WordPress 0day in the wild.

Checking the WordPress site confirms that a few hours ago they announced a new critical cross-site scripting vulnerability:

WordPress Security Notice

While we can’t confirm for certain that this is the vulnerability our phisher was using, it seems highly likely given the version numbers compromised.

Over the last few hours, we've worked closely with our friends at Bluehost to identify the remaining affected sites compromised by this phisher so they could take them offline. A quick response like this essentially renders all remaining phishing emails in this current campaign harmless. The need to quickly neutralize phishing sites is why CloudFlare engineers developed our own process for rapidly identifying and tagging suspected compromised sites. When a site on our network is flagged as a phishing site, we impose an interstitial page that serves both to warn potential visitors and to give the site owner time to fix the issue.

You can read more about our own process in this blog post.

How customers can stay safe

By enabling CloudFlare's WAF, customers have some protection against the sort of cross-site scripting vulnerability involved in this attack. However, anyone can still fall victim to a phishing email. Below are 7 tips to help you stay safe:

  • NEVER click on links in unsolicited emails or advertisements.
  • Be vigilant: poor spelling and strange URLs are dead giveaways.
  • Mouse over the URL and see if it matches what’s presented in the email.
  • Type URLs in manually where possible.
  • Stay up to date on your software and make sure you are running an up-to-date antivirus client — yes, even if you're using a Mac.
  • It’s possible to set traps for phishers: use a unique, specific email address for each account you set up. That way, if you get an email to your Bank of America email address asking for your Capital One password, you immediately know it's a phishing attack.
  • Finally, where possible, enable two-factor authentication. While not foolproof, it makes things much harder for attackers.

New Magento WAF Rule – RCE Vulnerability Protection


Today the Magento Security Team created a new ModSecurity rule and added it to our WAF rules to mitigate an important RCE (remote code execution) vulnerability in the Magento web e-commerce platform. Any customer using the WAF needs to click the ON button next to the “CloudFlare Magento” Group in the WAF Settings to enable protection immediately.

CloudFlare Magento Rule

Both Magento version 1.9.1.0 CE and version 1.14.1.0 EE are vulnerable. CloudFlare WAF protection can help mitigate vulnerabilities like this, but it is vital that Magento users patch Magento immediately: select and download the patch for SUPEE-5344.
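For readers unfamiliar with how a WAF rule like this works: ModSecurity rules pair a condition on the incoming request with an action to take when it matches. The fragment below is purely an illustrative sketch of what a rule blocking this class of attack might look like; the rule id, the matched path, and the message are our own assumptions for the example and this is not the actual CloudFlare or Magento rule.

```
# Illustrative sketch only: not the actual CloudFlare/Magento rule.
# The id, path, and message are assumptions for this example.
SecRule REQUEST_URI "@beginsWith /admin/Cms_Wysiwyg/directive/index/" \
    "id:900100,phase:1,t:none,t:urlDecodeUni,deny,status:403,\
    msg:'Possible Magento RCE (SUPEE-5344) exploit attempt'"
```

A rule like this rejects matching requests at the edge with a 403 before they ever reach the origin, which is why the WAF can buy time while users apply the patch, but it is a mitigation, not a substitute for patching.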

CloudFlare's New Dashboard


When we started CloudFlare, we thought we were building a service to make websites faster and more secure, and we wanted to make the service as easy and accessible as possible. As a result, we built the CloudFlare interface to put basic functions front and center and designed it to look more like a consumer app than the UI for the powerful network it controlled.

Over time, we realized there was a lot more to CloudFlare. In 2011, we added the concept of Apps, along with a myriad of additional performance and security features, from Rocket Loader to Railgun. All of these additional settings got buried under a lowly gear menu next to each site in a customer's account.


While still easier to navigate than the average enterprise app, using our UI could be a frustrating experience. For instance, imagine you wanted to turn on Rocket Loader for multiple sites. You'd have to go to My Websites, click the gear menu next to one of your domains, navigate to CloudFlare Settings, select the Performance Settings tab, scroll to Rocket Loader, then toggle it on. Then you had to go back to My Websites and repeat the process again for the next domain.

That's bad, but it gets worse if you have hundreds, or even thousands, of domains on your account. I have to confess that I never imagined when we started CloudFlare that people would use us to manage thousands of domains. But today, many of our customers do exactly that. Our old UI falls over for those users and they are forced to manage their accounts through our API.

Redesign

A little over a year ago, we started the process of redesigning our customer UI. We wanted to solve some of the most common critiques. But, beyond that, we wanted to build an interface that could scale to accommodate all the new things that CloudFlare can do today, and the things we're planning for the future.


To begin, we got rid of the My Websites page. Instead, in the upper left of the interface after you log in is a drop-down menu that allows you to pick your site. If you're editing your Firewall settings, for example, and you change the site, you'll see the Firewall settings for the new site. Beyond that, the new domain selector can handle however many domains you add to your account, and you can search by domain name or sort by the domain's setup status. So go ahead, add all of your domains: we can handle it.

Apps on an Operating System

Internally, we talk about how what we're building at CloudFlare is an operating system for the edge of the Internet. The core of that operating system has the ability to process packets, apply rules to those packets, and then process them in different ways.

We realized that what sits on top of that operating system is a set of apps. It's not just third parties that create apps, which is how we thought about apps previously. When we build features, we're building apps too: Caching is an app, Firewall is an app, Analytics is an app. The metaphor that made the most sense was that we were building the equivalent of a modern smartphone. Certain apps come pre-installed, and others are created by third parties that you can add if they make sense for your needs.


That became the organizing metaphor for the new dashboard. At the top of every page is a dock of apps, under which we've organized all of CloudFlare's features. These apps are grouped into logical categories to keep you from having to click through multiple tabs or scroll through pages of settings when you want to make a change. The ability to add both CloudFlare-developed and third-party apps gives us flexibility as we continue to build more functionality into CloudFlare's platform.

Responsive to the Future

In addition to flexibility on our side, we wanted to give you the flexibility to manage CloudFlare from any device. More than 20% of traffic to our old dashboard came from mobile devices, yet the interface wasn't designed for a tiny screen. If you've ever tried to interact with a complicated interface like our DNS manager by pinching and zooming, you understand why it wasn't the interface of the future.


The new dashboard is fully responsive. That means you can now manage CloudFlare from your mobile phone or tablet, and the dashboard will automatically adjust to whatever screen real estate you have. We think we've built the first full-featured DNS manager you can use from a mobile device. Give it a try and let us know what you think.

Roll Out

We're rolling out the new dashboard by default to all our customers over the course of this week. For the next few weeks, you can roll back to the old interface if you can't find what you're looking for in the new dashboard. But this is the future we're committed to, so if there's something you don't like, make sure to give us feedback: click the ‘Send Feedback’ link at the bottom right of the new control panel. In the meantime, if you have more questions about the new dashboard, check our FAQ.


Over the next week, we'll be publishing posts about how we designed the new dashboard and some of the new features it enables. We're excited about the potential the new interface unlocks, and, someday, maybe we'll get around to redesigning our marketing pages as well.
