While it’s certainly important to have tools and systems in place to detect and mitigate cyber attacks, the best approach to protecting your business against digital threats will always be proactive. By assessing weaknesses in your physical and digital infrastructure, you can determine how your business might be targeted. And by acquiring that knowledge, you can prevent that from happening at all.
Mind you, simply performing a cyber risk assessment isn’t enough to guarantee you’re safe from a cyber attack. The biggest error any business can make is assuming they’re entirely impenetrable. There are always blind spots, and there will always be vulnerabilities.
Towards the end of last year, thousands of WordPress sites were discovered to be infected with a nasty bit of malware that included a keylogger and cryptocurrency miner. The malware relied on a server located at the fake cloudflare.solutions domain, which was quickly taken down, stopping it from sending data to the people behind the attack.
But, it appears the same malware is back, infecting WordPress sites and communicating its payload via various new domains. It should be understood that the domains the malware is using have nothing to do with the real Cloudflare. Since the proliferation of top-level domains over the last few years, it’s straightforward for an attacker to register a domain similar to the existing domain of a prominent company. At a cursory glance, an inexperienced site owner is likely to overlook code using domains that they associate with a legitimate business whose services they may use.
At the time of writing, it appears that several thousand WordPress sites have been infected, so, if you use our Virtual Private Server or Dedicated Server hosting platform to host a WordPress site, it’s worth taking a moment to make sure that it isn’t infected.
The malware has two roles: firstly, it logs keystrokes entered into form fields on a WordPress site and, secondly, it loads cryptomining code in the browsers of site visitors.
The keylogger is dangerous, particularly on WordPress sites that ask users to enter identifying or otherwise sensitive data. Ordinarily, that type of data is encrypted as it travels over the web so that an eavesdropper can’t intercept it. But, in this case, the malicious code is part of the site itself and can access the data directly.
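If you want to check a site by hand, one quick heuristic is to search its files for the domains this campaign is known to use. Here is a minimal sketch in Python; the domain list is illustrative (cloudflare.solutions comes from the original reports, so consult current threat intelligence for the new domains):

```python
import os

# Example indicator: cloudflare.solutions was the domain named in the
# original reports. Add the current campaign's domains from threat reports.
SUSPICIOUS_DOMAINS = [b"cloudflare.solutions"]

def scan_wordpress(root):
    """Return paths of theme/plugin/core files referencing a listed domain."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".php", ".js", ".html")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue
            if any(d in data for d in SUSPICIOUS_DOMAINS):
                hits.append(path)
    return hits
```

A clean scan is not proof of a clean site, of course; a dedicated scanner or Sucuri's tools will catch far more.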
The cryptomining scripts can cause problems for site visitors. Cryptocurrencies are created by a process called mining, which is essentially running lots of hard math on a computer’s CPU or GPU. Once enough processing work is done, the miner gets a coin.
Because lots of computer power is needed to generate even a small number of coins, one solution is to distribute the work among lots of low-power computers, which is exactly what the cryptomining malware does. The attacker gains cryptocurrency without having to invest in expensive hardware to do the work. Cryptomining malware consumes resources, including power, which is not something any site owner should inflict on their users, especially those using mobile devices.
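To make the “processing work” concrete, here is a toy proof-of-work loop in Python. Real miners are vastly faster and Monero in particular uses a very different, memory-hard algorithm, but the principle, hashing repeatedly until an output meets a difficulty target, is the same:

```python
import hashlib

def mine(data, difficulty):
    """Search for a nonce whose SHA-256 hash of data + nonce begins with
    `difficulty` zero hex digits: a toy stand-in for real mining."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1
```

Each extra zero digit multiplies the expected work by sixteen, which is why distributing the search across thousands of hijacked browsers is so attractive to attackers.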
It’s not clear how the malware infects WordPress sites in the first place, but the usual suspects are probably to blame: outdated WordPress sites with known vulnerabilities that haven’t been patched. Keep your WordPress sites up-to-date, folks!
If your site is infected, the most certain and effective way to remove malware is to reinstall WordPress and restore files and the database from a recent backup you’re sure is uninfected. If that’s not an option for you, Sucuri has an excellent guide to removing malware from a hacked WordPress site.
Earlier this month, it was reported that over 4,200 government and commercial sites in the US and UK were infected with cryptomining malware. The sites hadn’t been compromised by attacks on their servers or content management systems. Instead, the attackers targeted Browsealoud, a utility used by all of the affected sites. Browsealoud is an accessibility tool that gives websites the ability to read content out loud.
At the time of writing, it isn’t clear how the malicious code found its way into Browsealoud, but once it was injected, every visitor to sites that used Browsealoud — it is to be found on many government and corporate sites — downloaded and executed the code.
In this case, the malicious code loaded Coinhive’s Monero miner, which uses the resources of site visitors’ machines to mine for the Monero cryptocurrency. Mining cryptocurrency at scale usually requires a large investment in high-power hardware. An alternative to buying expensive mining machines is to distribute the work among thousands of lower-powered machines, which is exactly what the Browsealoud attackers hoped to achieve.
Having your computer’s resources wasted to fill an attacker’s Monero wallet isn’t good, but the malware payload could have been much worse. The same technique can be used to inject malvertising, keyloggers, spyware, botnet software, and anything else the attackers deem useful.
Today’s web wouldn’t function without code from third-parties. The vast majority of the sites you visit pull in code from content distribution networks, analytics platforms, and a multitude of other sources. Popular projects like Browsealoud are prime targets for online criminals: compromise one project and you gain access to thousands or millions of users.
Unfortunately, it’s next to impossible for site owners to thoroughly vet every line of code their sites rely on. However, there are security precautions that will reduce the chance of a successful supply-chain attack. Subresource Integrity (SRI) is a security feature built into browsers that sites can use to check the authenticity of code they load from third-party sources.
To use SRI, site owners provide a cryptographic hash of the file to be fetched by the browser. The browser generates a hash of the fetched file and compares it to the hash provided by the site; if the hashes match, the content can’t have been tampered with. None of the sites affected by the Browsealoud attack used SRI.
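Generating the hash is straightforward. The sketch below computes an SRI value in Python (the same value can be produced with openssl or shasum), and the URL in the comment is a placeholder:

```python
import base64
import hashlib

def sri_hash(file_bytes, algo="sha384"):
    """Return a Subresource Integrity value such as 'sha384-...'."""
    digest = hashlib.new(algo, file_bytes).digest()
    return algo + "-" + base64.b64encode(digest).decode()

# The result belongs in the integrity attribute of the tag that loads
# the third-party file, for example:
#   <script src="https://cdn.example.com/lib.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
```

If the file fetched from the CDN no longer matches the hash, the browser refuses to execute it, which would have stopped the tampered Browsealoud script cold.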
“Today, a single person can perpetrate a multi-million dollar cybercrime with impunity. Activists, hacktivists, nation states, organized crime and rogue individuals are making careers as cyber thieves.”
I’m not a big fan of New Year’s Resolutions, especially when server security is involved. Security should be a constant concern for anyone doing business on the web. But, as a new year begins, it is a good time for server hosting clients to review the security policies and the systems they have in place. It’s empowering to start the new year confident that everything is working as it should.
2017 has not been a great year for online security. From ransomware to disastrous data thefts, stories that would once have been confined to tech news sites have played out across the popular media. Ransomware extracted hundreds of millions of dollars from the economy, money which now bloats the electronic wallets of crime organizations. I’d be surprised if anyone reading this post doesn’t at least know someone who was affected by the Equifax data leak.
WordPress security company Sucuri has brought attention to an increased incidence of WordPress sites compromised by malware installed as part of pirate or “nulled” premium themes. WordPress site owners who install pirate themes put their sites and their users at risk.
The wp-vcd malware creates an admin account with a known password. The malware’s creators use the admin account to install backdoors and further malware, which can be used to inject SEO spam or any other content or code that benefits the attackers. The use of pirate themes as a vehicle to infect WordPress sites with malware is not new, but it’s becoming increasingly common.
The holidays are right around the corner. It’s time to get your site ready and prepare for this festive season! From net neutrality to security breaches to McDonald’s accepting Bitcoin as a form of payment, technology is evolving and getting more complicated by the minute. Check out this roundup and enjoy the rest of November’s best content. If you enjoy this collection of the web’s top articles, feel free to follow us over on Facebook, Twitter, and Google+ for the same great content the rest of the year.
Cybercrime is now a billion-dollar industry. Ransomware attacks such as WannaCry and Petya are growing steadily more common, while social engineering methods like phishing are being used with alarming frequency to break through even the most ironclad security. So expansive, extensive, and varied are the different types of attacks one might fall victim to, it can seem nearly impossible to truly protect oneself.
Consider, for example, that the United States energy grid was recently accessed by an unauthorized party. One of our most valuable pieces of infrastructure could have been shut down with just a few keystrokes. And it all happened because of a few insecure email accounts.
Hackers are kind of like an electrical current – they’ll generally follow the path of least resistance. The less they have to do in order to victimize their targets and turn a tidy profit, the better. Small wonder, then, that ransomware has gained such popularity in the black hat community.
It’s pretty much the golden goose of cybercrime. Instead of having to crack through several layers of security, attackers can simply send their code into the wild and wait. Eventually, it’ll find its way onto a corporate system, and administrators will pay them a mint to regain access to their data.
A distributed denial of service attack is the equivalent of driving a truck through the front of a store. It’s not complicated, technically complex, or difficult to pull off. At the same time, the amount of damage it can potentially do is huge – and defending against it is extremely difficult.
That problem is only compounded by the growth of the Internet of Things. New devices are flooding onto the market at a downright alarming rate. Never mind computers, smartphones, and routers. Now, we’re seeing smart televisions, refrigerators, thermostats, and even coffee makers.
Unless you’ve been holidaying on the moon, you’ll be aware of the unfortunate incident of Equifax and the security breach which caused a monumental leak of sensitive data impacting well over a hundred million Americans and countless others from around the world. The attackers accessed the data through a vulnerability in Apache Struts, a web framework used to develop Java EE web applications.
If you use Apache Struts, make sure that it has been updated to the most recent version.
Business leaders understand the value of data, so I’m not going to waste space urging them to back up or repeat the old saw that if data only exists in one place, it doesn’t really exist at all. But I am going to ask a related question: are you sure your server backups are functioning as they should and that your data really is replicated where you think it is?
Anyone who has worked as a system administrator or network engineer will understand where I’m coming from with that question. As business owners, online publishers, and web service providers, we often focus on the process of creating backups and designing disaster recovery plans. But far less attention is paid to regularly checking that those processes are working as intended.
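Checking doesn’t have to be elaborate. As a starting point, a script can walk the source tree and confirm that every file exists in the backup with a matching checksum. A minimal sketch (it deliberately ignores permissions, symlinks, and databases, which need their own checks):

```python
import hashlib
import os

def file_digest(path):
    """SHA-256 of a file, read in chunks so large files are safe."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir, backup_dir):
    """Return (relative_path, reason) pairs for files that are missing
    from, or differ in, the backup."""
    problems = []
    for dirpath, _, filenames in os.walk(source_dir):
        for name in filenames:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source_dir)
            dst = os.path.join(backup_dir, rel)
            if not os.path.exists(dst):
                problems.append((rel, "missing"))
            elif file_digest(src) != file_digest(dst):
                problems.append((rel, "differs"))
    return problems
```

Run something like this on a schedule and alert on a non-empty result; the point is that verification, like the backup itself, should be automatic.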
See, hackers are kind of like electrical currents. They naturally follow the path of least resistance. And while the payoffs of hacking a major enterprise might be significant, such companies are anything but easy victims.
The solar eclipse has come and gone and we’re all still here. So onto the August edition of our monthly roundup! Let’s take a quick sneak peek at the last thirty days of news. Highlights include a .fish website that was actually a phishing site, the split of Bitcoin, and the emergence of Whoppercoin. Interest piqued? Feel free to jump into the roundup and enjoy the rest of August’s best content. If you enjoy this collection of the web’s top articles, feel free to follow us over on Facebook, Twitter, and Google+ for the same great content the rest of the year.
I don’t think I’ve met a single person who was a fan of RSA Tokens. They’re cumbersome to use, and easily lost. They seem like they need to be reset at least once a week, and inevitably end up taking a massive chunk out of a security budget that honestly can’t handle the expense.
Worst of all, they’re incredibly outdated – everyone these days already carries a smartphone to work. Nobody wants to be saddled with another device. Especially not something so clunky and aggravating.
You can thank Apple and Google for this, honestly. The advent of the smartphone essentially deep-sixed the old paradigm of enterprise IT. Suddenly, users had completely unfettered access to resources and tools which might traditionally be managed and deployed exclusively by security professionals.
Over the last couple of months, I’ve been interested in that strange zone between known good practice and reality. Watching successive waves of ransomware sweep across Europe and the US, I wonder why so many companies are falling prey to attacks that are, in theory, easy to defend against. I’m aware that there are good reasons not to update and that large organizations move slowly, but measures can be put in place to protect systems we know to be vulnerable.
One of the nicest qualities of WordPress is that it’s so easy to install. Once the WordPress application is uploaded to a server, it takes a few minutes to enter the necessary information and you’re done. But that ease-of-installation can be turned against WordPress users and server administrators. In what’s been called the WPSetup attack, attackers are searching for incomplete WordPress installations and using them to take over sites and servers.
SSL certificates used to be expensive and complex to install. Most website owners didn’t think the upside was worth the effort and so the vast majority of sites were served without encryption. Let’s Encrypt, first introduced in 2015, changed all that. With Let’s Encrypt, certificates are free and it’s easy to install and use them on common server configurations. This June, Let’s Encrypt celebrated its hundred millionth certificate. Its certificates are now used on over 47 million domains.
One of the most striking limitations of Let’s Encrypt is its inability to issue wildcard certificates, but that’s not going to be a problem for much longer. The project has announced that from January 2018, with the introduction of the ACME v2 API endpoint, Let’s Encrypt will be able to issue wildcard certificates.
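Because Let’s Encrypt certificates are short-lived (90 days), automated renewal, and a check that renewal actually happened, matter more than with traditional certificates. A small helper for the latter, using the notAfter date format that Python’s ssl module reports for a peer certificate:

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days remaining on a certificate, given its 'notAfter' field in the
    format ssl.SSLSocket.getpeercert() uses, e.g. 'Jun  1 12:00:00 2025 GMT'."""
    expires = ssl.cert_time_to_seconds(not_after)
    if now is None:
        now = time.time()
    return (expires - now) / 86400.0
```

A cron job that fetches the live certificate, feeds its notAfter field to a check like this, and alerts below, say, 14 days will catch a broken renewal long before visitors see browser warnings.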
A critical vulnerability has been discovered in Unix-like operating systems, including Linux and various flavors of BSD. The vulnerability can be used for local privilege escalation, allowing a local user or an attacker who has managed to compromise a server to gain root access.
All major Linux distributions have released patches to mitigate the risk, including CentOS. Server hosting clients should update the Linux kernel and GLibc immediately.
In a recent vote of PHP’s core developers, it was decided to integrate libsodium, a modern cryptographic library offering a comprehensive set of high-level security APIs for developers of PHP applications.
PHP is the language of much of the web. Many of the web’s most popular applications and frameworks are written in PHP, including WordPress, which accounts for 27% of all sites on the web. PHP runs on millions of shared hosting accounts, virtual private servers, and dedicated servers. And yet, the built-in cryptography options for the world’s most popular server-side language aren’t all that impressive.
The battle for a secure web has been long and hard-fought, but over the last couple of years, we’ve been edging closer to the — probably unattainable — ideal of SSL Everywhere. The number of sites delivering content over secure encrypted connections has soared, largely because buying and installing SSL certificates is no longer onerous. For most sites, a domain validated certificate is adequate. DV certificates are available for free from certificate authorities like Let’s Encrypt, which also provides a tool to install and verify certificates on common server configurations.
ImageMagick, a near-ubiquitous image processing library, has once again been discovered to harbor a serious vulnerability with the potential to leak sensitive data to an attacker. The vulnerability was patched a couple of months ago, but full details only became available this month. Although the vulnerability is quite difficult to exploit in most scenarios, it’s advisable for all users of ImageMagick and applications that depend on ImageMagick to update to the most recent version. The best way to mitigate the risk is to update to your Linux distribution’s most recent release, as most distributions, including CentOS, have applied the patch.
Online criminals target download servers because users trust them and developers neglect them. Once a download server is up and running, it needs little maintenance so regular updates and security checks are easily overlooked.
In a recent tale of download server woe, the well-regarded open source transcoding application Handbrake was attacked. One of its download servers was compromised and the application’s binary infected with the Proton Remote Access Tool (RAT). RATs are a common type of malware used to spy on users by logging their keystrokes, controlling their webcams, and reading their files. Handbrake’s secondary download server was compromised — the project has a couple of mirrored servers — so anyone who installed Handbrake before the malware was discovered had a 50% chance of being infected.
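This is a good argument for verifying downloads against the checksum or signature a project publishes, ideally obtained over a channel separate from the download mirror itself. A minimal checksum check in Python:

```python
import hashlib

def matches_published_checksum(path, published_sha256):
    """Compare a downloaded file against a project's published SHA-256.
    The published value should come from a trusted channel, not the
    same mirror the file was downloaded from."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == published_sha256.strip().lower()
```

A checksum only helps if the attacker couldn’t also tamper with the published value, which is why signed releases (for example, GPG signatures) are stronger still.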
Whether you need some coding work done or some hardware installed, there’s a good chance you might eventually need to bring in some outside talent. This is especially true if you’re running a smaller organization in which there may be significant skill gaps – one of your largest hurdles to growth. With that in mind, you need to be careful who you bring in.
Whether you’re seeking extra privacy online or looking to work securely from your mobile phone, VPNs are an attractive option for protecting your digital data. At least… they are in theory. Here’s the thing, though – no two VPNs are created equal, and you need to be careful about which one you decide to use.
Because if you use the wrong one, you might as well not be using anything at all.
The Drupal team has issued a security advisory revealing a critical vulnerability in the References module. References is currently unmaintained and it’s unlikely the vulnerability will be fixed. Drupal users who depend on References should find an alternative as soon as possible.
Drupal is a popular content management system. It’s estimated that Drupal powers about 2% of the web, which is impressive for a CMS that isn’t WordPress. Drupal is a richly featured PHP content management system that can be used to build sites ranging from blogs to enterprise sites with complex content management requirements.
Laravel, which bills itself as the “PHP Framework For Web Artisans,” has announced that version 5.5 will require PHP 7. Laravel 5.5 is scheduled for release in July, 2017. If you’re using an older version of PHP for your applications, now’s the time to start thinking about updating.
PHP 7 was first released in December 2015 and brought with it a host of improvements. Most importantly, PHP 7 offers considerably better performance than earlier versions of the language. It’s not unusual for PHP-based projects to see latency improvements of around 50% when they upgrade to PHP 7. As a side effect of improved performance, upgrading to PHP 7 can also significantly reduce server load, allowing for a more efficient use of infrastructure resources.
Stop me if this story sounds familiar: you’re running penetration tests on your systems, and you discover several vulnerabilities. Unfortunately, when you report these vulnerabilities to your superiors, they aren’t seen as ‘priority’ fixes. Or maybe you’re on the other side of the fence – maybe you’re the one saying those bugs aren’t really an issue at the moment.
They’re so obscure, who could possibly think to exploit them?
If you turn to Google to help you solve your tech problems, how much do you really know about your field?
That’s probably a question that most of you have entertained at one point or another. As the old saying goes, the more you know, the more you realize what you don’t know. And with the largest repository of information in human history at your fingertips, you’ve probably already realized – there’s a lot that you don’t know.
A critical remote code execution vulnerability has been patched in the popular Java web application framework Apache Struts. The vulnerability is being actively exploited in the wild. Organizations using Apache Struts to build Java web applications should update to the patched version immediately to mitigate the risk of exploitation.
Vulnerable versions of Apache Struts include 2.3.5 to 2.3.21 and 2.5 to 2.5.10. Updating to the newest versions of Apache Struts will remove the vulnerability.
The fact is that you need to take measures to protect your business from both targeted attacks and malicious software. You need a game plan. Because you don’t want to be running about like a chicken with its head cut off in the event that your organization does end up a target.
The claim that more data is always a good thing has been repeated so often it has become a cliche. But data is only valuable insofar as it can be used to further the interests of the business. Otherwise, at best it is a wasted opportunity and at worst presents a legal and security risk.
The volume of data modern companies gather is enormous. It streams in from business operations, employee and customer activity, email, social media, and numerous other channels.
SonicWall gave us some good news and some bad news in a recent report on the cybersecurity landscape in 2016. The good news: malware attacks are down slightly. The bad news, as anyone who manages websites or works in IT knows: ransomware is up massively and DDoS attacks leveraging the IoT are the year’s highlight.
The drop in malware attacks is positive, but it’s unlikely to make much difference to site owners or hosting providers. The biggest decline was in the retail industry, driven by the introduction of chip-and-pin cards. If there’s ever a good time for server admins to take it easy, it’s not now.
It’s that time of year again. Everyone’s looking back on the year that just passed, while wondering what the new year will bring. While I do believe it’s important to reflect, that’s not what we’re here to do today.
Fail2Ban is a security application that can block malicious connection attempts.
Connecting a server to the Internet exposes it to all manner of dangers. A server is a neat package of network connectivity, compute resources, and storage space. And that’s before we consider the applications running on the server, which can provide a rich stream of users and their associated data.
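For SSH, the classic first step is banning hosts that repeatedly fail to log in. A sketch of what a Fail2Ban jail might look like in jail.local; the jail name applies to recent Fail2Ban versions, and the values are illustrative and should be tuned for your server:

```ini
# /etc/fail2ban/jail.local (illustrative values; tune for your server)
[sshd]
enabled  = true
port     = ssh
# ban a host for one hour after 5 failed logins within 10 minutes
maxretry = 5
findtime = 600
bantime  = 3600
```

Fail2Ban watches the authentication log and inserts temporary firewall rules for offending addresses, so brute-force attempts slow to a crawl without any manual intervention.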
Towards the end of 2016, WordPress 4.7 “Vaughn” was released. Over the next couple of weeks, point releases were made available, fixing the bugs that accompany any new software. But WordPress 4.7.2 also fixed a critical security vulnerability that allowed attackers to edit or publish posts on a WordPress site without authentication. Unfortunately, large numbers of WordPress sites have not been updated and remain vulnerable to the attack.
Good morning, ladies and gentlemen. Do you know where your data is right now? Maybe you should.
Security firm Kryptowire recently made a troubling discovery about a large number of prepaid ZTE and Huawei phones. Turns out, these devices contained an unwelcome addition – a security backdoor that sent its users’ data, including text messages, to servers in China every 72 hours. Yikes, right?
Over the last decade, Google has made increasing efforts to avoid sending users to sites that present security problems. It’s common for site owners to find out about a security problem in an email from Google – or even worse, emails from users asking why their browsers are displaying a prominent warning that the site isn’t safe.
When sites fall foul of Google’s various security policies – Malware, Unwanted Software, Phishing, and Social Engineering – the company will warn users of the danger. Site owners can, once they’ve removed the problem, request a reassessment and have the warnings removed.
If you’re worth your salt as an administrator, you’ve already got network-level security covered. You have a decent authentication process in place, and firewalls and monitoring tools to guard against intrusion. That’s all well and good – but what about your internal and external communications?
Three factors combine to make WordPress particularly problematic where security is concerned. It’s hugely popular. It has a large plugin ecosystem. Its users tend to be non-technical.
Because of its popularity, it’s an obvious target for online criminals: if they find a vulnerability in WordPress, they have the key to millions of websites. Plugin developers are of mixed quality and ability, and even the most diligent plugin developer can make a mistake — the thousands of plugins in the plugin repository receive less scrutiny than WordPress Core and are more likely to contain undiscovered vulnerabilities.
Historically, web browsers have shown users when their connection to a site is secure. When connections might be thought secure by users, but could expose data to third-parties, as with mixed content, browsers have displayed more prominent warnings.
Insecure sites — those with no SSL/TLS protection — displayed no warning or notification. That’s changing. Some versions of Google’s Chrome browser now display a warning on “insecure” sites, subtly but significantly changing the way that users perceive the security or otherwise of a web site.
A critical vulnerability in the Roundcube webmail application could allow an attacker to install and execute arbitrary code. Users of versions of Roundcube prior to 1.2.3 should update immediately to remove the risk. All versions from 1.0 to 1.2.2 are vulnerable. The vulnerability was patched immediately upon its disclosure, and the patched version is available from the Roundcube site and Linux distribution repositories.
There’s some new ransomware in the wild, and it’s some of the nastiest yet – it even puts the legendary Cryptolocker to shame.
For the uninitiated, ransomware is a relatively new (and fast-growing) breed of malware with a rather unusual twist. While most malicious programs are designed to steal files, provide hackers with a backdoor into corporate systems, or simply destroy everything they touch, ransomware uses one of the most powerful tools in a security professional’s arsenal – encryption – against them. It locks down access to a user’s entire system until they’ve paid a ransom, which ranges anywhere from a few bucks to tens of thousands of dollars.
A serious privilege escalation vulnerability was recently (re)discovered in the Linux kernel. The vulnerability could have allowed attackers to gain write access to read-only memory mappings and modify on-disk binaries, bypassing the usual mechanisms that prevent ordinary users modifying system files. There’s some evidence that the vulnerability is being actively used in the wild.
Following the release of a kernel patch, all major Linux distributions have released new patched kernels that close the vulnerability. CentOS — which Future Hosting uses on most of its hosting plans — has been patched. We’ve updated our hosting plans so that our managed hosting clients are no longer vulnerable.
On today’s web, you’d think it uncontroversial to encrypt connections between web browsers and servers. We live our lives on the web. Every day, we send financial and other sensitive data to servers and services we trust — at least nominally. Without an HTTPS-encrypted connection, which requires an SSL certificate, that data is sent in the clear for anyone to read.
But we all come across sites that don’t offer HTTPS connections. Many are content publishing sites — blogs, news sites, image sites — and their owners simply don’t see the upside of offering HTTPS. I think that’s a mistake, and I’m not alone. Most security experts recognize the benefits of HTTPS for almost every site, as do organizations like Google, which advocates for HTTPS Everywhere and uses HTTPS as a positive ranking signal.
Over the last few years, ransomware developers have been almost entirely focused on extorting money out of Windows desktop users. A combination of non-technical users and a less robust permissions system makes them easy pickings. But there’s no doubt that servers are a juicy target for online criminals. Imagine how you would feel if your business’ website was forced offline because its files or database were encrypted. What would you pay to get them back?
As things stand, there’s not much of a risk this will happen. The developers of the most prominent Linux-targeting ransomware — Linux.encoder — have proven singularly incompetent. Every time they release a new version, it’s quickly cracked by security researchers. But enterprise servers are too tempting a target to be safe for long.
It won’t be news to readers of this blog that Distributed Denial Of Service attacks are a growing problem. This July, a European media company was the victim of an attack that peaked at 363 Gbps. The volume of the attack is par for the course these days, but it is interesting to note that the attackers used several vectors to amplify the attack, including DNSSEC.
For those who aren’t familiar, here’s how a typical reflected amplified DDoS attack works. Even the most well-equipped of attackers don’t have access to the amount of bandwidth we commonly see deployed in DDoS attacks. To achieve such huge volumes of data, they need to amplify their bandwidth. There are many ways to do this, but a typical approach is to use open DNS servers.
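The arithmetic behind amplification is simple. With illustrative (assumed) sizes for a small DNS query and a large DNSSEC-laden response:

```python
def amplification_factor(request_bytes, response_bytes):
    """Ratio of the reflected response size to the spoofed request size."""
    return response_bytes / request_bytes

# Illustrative (assumed) sizes: a ~64-byte query and a ~3,000-byte
# response carrying DNSSEC signature records.
factor = amplification_factor(64, 3000)   # about 47x

# At that factor, sourcing a 363 Gbps flood takes under 8 Gbps of
# spoofed queries spread across open resolvers.
attacker_gbps = 363 / factor
```

The attacker spoofs the victim’s address as the query source, so the open resolvers helpfully deliver the oversized responses straight to the target.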
On July 15, it was revealed that the Ubuntu forum — a primary support channel for the popular Linux distribution — had been compromised, with over 2 million emails, usernames, and IP addresses in the hands of hackers. Passwords were never at risk because the forum used Ubuntu Single Sign-On, and password hashes were not stored in the forum software.
We make Ubuntu available as an option on our unmanaged virtual private servers, so we think clients should be made aware of the attack.
As an administrator, we know you’re only human. Like anyone else, you make mistakes. Thing is, if you don’t catch and address those mistakes in time, you could be left with egg on your face…and sensitive data on the web.
Not really a situation you want to find yourself in.
You need to make sure your security is ironclad (or at least as near as you can get to that). That’s why today, we’re going to go over a few rather serious (yet curiously common) server security blunders. If you recognize any of these flubs as something you’ve done yourself, maybe it’s time to start rethinking your approach to security.
For a law firm like Mossack Fonseca, which built its entire reputation on secrecy, it was the worst-case scenario. 11.5 million documents, and more than two terabytes of data, blowing the secrets of its clients wide open. Scores of celebrities, world leaders, and businesses with offshore accounts having their dirty laundry aired in public.
I’d like to preface today’s piece with something of a story. You’ve all watched heist movies, I’m certain – films like Ocean’s Eleven, The Italian Job, and The Usual Suspects. What’s one thing they all have in common?
A man on the inside.
See, the truth is that no matter what measures you take to protect your organization – no matter how much you harden your network devices and strengthen your encryption – there will always be a weak link. If a criminal cannot gain access to your business by targeting a security vulnerability, they’ll target your people, instead. Eventually, they’ll come across someone who lets them in.
News of data leaks involving sensitive user information, including usernames and passwords, makes media headlines with alarming frequency. Authentication details are valuable information for cybercriminals, who can use them to gain access to further information and to commit identity theft.
If your web application stores user data, it’s at risk of having that information stolen. Of course, it’s best to have security precautions in place to ensure that data doesn’t leak, or that it’s useless even if it does, but what should a business do once it believes that its username and password database has been made publicly available?
Back in March, a security professional found out something rather disturbing about Git, one of the leading version control systems on the web: a pair of vulnerabilities that, left unpatched, could bring down entire servers.
You may be wondering why that’s so distressing. After all, vulnerabilities are nothing new, right? No piece of software can be considered truly secure.
The fact that these flaws exist isn’t the troubling part. It’s the fact that they’ve existed in the application, unpatched, for several years. And they’re present in multiple branches, including 2.x, 1.9, and 1.7.
All versions of bbPress prior to 2.5.9 are vulnerable. Users of older versions of bbPress should update immediately. Because the vulnerability was publicly disclosed following the release of a patch, malicious third-parties are aware of it, and the chances are high that bbPress sites will come under attack.
The X.509 SSL certificate system is vital to secure communication on the internet. SSL certificates, issued by certificate authorities, are responsible for ensuring we know who we are connected to when we send private information. They’re also responsible for the encryption that prevents snoopers from seeing the data we send over network connections.
For SSL certificates to do their job, certificate authorities must only issue certificates to organizations whose identities have been validated. If certificate authorities issued certificates without validating the identity of the applicant, we’d have no way of knowing whose servers we are connected to. There are, however, multiple levels of validation, which range from almost no validation to a thorough investigation of the organization.
It’s not unusual for a password database to be leaked, which is exactly why they should be designed to be “leak proof”. Decent password storage implementations are designed so that the passwords can’t be read even if the password database leaks. The LastPass database leak from a year ago is a perfect example. LastPass’ servers were breached and the password database leaked, but because they were stored properly, there is almost no chance that the attacker discovered the passwords from the hashes in the database.
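As an illustration, here is a minimal sketch in Python of the kind of "leak-proof" storage described above: a slow key-derivation function with a per-user random salt, so the plaintext password is never stored and the hashes are expensive to reverse. The function names and the iteration count are illustrative, not any particular vendor's implementation.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash; the plaintext is never stored."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

Because each user gets a unique salt and each guess costs 200,000 hash iterations, an attacker who steals the database cannot use precomputed tables and must attack each account individually, at enormous cost.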
For businesses of all sizes, the data they store is their business. Many modern businesses exist solely to analyze, publish, and otherwise work with data. The service economy is a data economy. Organizations that don’t simply move data around — manufacturing companies for example — depend on data for sales, for design, for production.
If a company lost customer records, sales information, payroll data, operational metrics, marketing analytics, and the data that constitutes its website, it would have a hard time recovering — many businesses that suffer catastrophic losses never recover. Consider how much of a setback it would be if your business lost a substantial part of its data, and then consider whether your current backup plans are in line with the potential loss.
It’s widely acknowledged that offering HTTPS connections on sites of all different types is a good thing for security and privacy. Encrypted connections prevent eavesdropping, man-in-the-middle attacks, and the altering of data traveling over the connection. However, owners of some types of site — although they may acknowledge the theoretical benefit — think the negatives outweigh the positives. They worry about the cost and complexity of implementing SSL / TLS, the difficulty of managing certificates, and I’ve quite often heard site owners complaining about the potential performance impact of establishing SSL / TLS connections.
From the (arguably insane) debate over strong encryption to the tensions between the US and China to troubling revelations about the NSA, government surveillance has been at the fore of everyone’s mind for quite some time. Especially given what just recently happened with Juniper. I should probably offer some context there.
If you’ve been following the tech news of late, you’ll have heard about a serious vulnerability in the Linux kernel that could allow an attacker to gain root access. The media has treated the story with its usual restraint: headlines abound about the vulnerability of millions of servers and Android phones. I’d like to take a more level-headed look at the vulnerability and the impact it might have on web hosting clients.
If you’re new to the world of small-business websites, you might assume that the space is full of well-meaning honest professionals. For the most part, you’d be right. The vast majority of web developers, designers, search engine specialists, and web hosts are trustworthy, but there are bad apples that small business owners should be wary of.
Consider this scenario — one that I’ve encountered many times. A small business owner decides to launch a new website to publicize her business. She doesn’t have a clue how web design or web hosting works, so she hires a “web master” to take care of the whole thing. He registers a domain for her business. He builds a good-looking WordPress site with a premium theme and the copy our business owner supplies.
SSL certificates underpin online security and privacy. Using an up-to-date version of SSL / TLS, they are a practically undefeatable mechanism for ensuring the privacy of data transferred between servers and web browsers. But SSL certificates have another job to do: they are used to verify the identity of domain owners, which needs more than math. It needs, among other components, a group of organizations to validate the identity of domain owners and create certificates — the Certificate Authorities.
According to a recent report from the security firm Sucuri, WordPress’ XML-RPC system is once again putting WordPress users and sites at risk. A flaw in XML-RPC exposes WordPress sites to brute force attacks that are significantly more effective than those using the obvious brute force attack vector, the login page.
Brute force attacks are the least sophisticated strategy an attacker can use to gain authenticated access to a WordPress site. Attackers simply try many different username and password combinations until they hit on one that works. The only real sophistication is how the attacks are automated and the mechanism by which they attempt to verify credentials. Most brute force attacks are carried out by bots — programs that try combinations as quickly as possible on as many sites as possible.
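As a sketch of the defender's side, a simple rate limiter like the one below blunts bot-driven brute force by capping how many failed attempts a single source can make in a sliding window. The function names and thresholds here are illustrative, not part of WordPress or any specific plugin.

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5       # failed logins allowed per window
WINDOW_SECONDS = 300   # length of the sliding window

failures = defaultdict(list)   # source IP -> timestamps of failed attempts

def allow_login_attempt(ip, now=None):
    """Return False once an IP has used up its failure budget."""
    now = time.time() if now is None else now
    recent = [t for t in failures[ip] if now - t < WINDOW_SECONDS]
    failures[ip] = recent          # discard timestamps outside the window
    return len(recent) < MAX_ATTEMPTS

def record_failure(ip, now=None):
    """Log a failed login against the source IP."""
    failures[ip].append(time.time() if now is None else now)
```

The same idea underlies tools like fail2ban: a bot that can only make five guesses every five minutes is effectively neutralized, while legitimate users who mistype a password are barely inconvenienced.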
We all know that passwords aren’t a good method of authentication. Complex passwords are hard to remember, simple passwords are next to worthless. And yet, most web developers who log in to production and development servers use SSH with a password.
The dangers are obvious: for even fairly long passwords, which most people don’t use, a brute force attack against an SSH server can prove effective. Passwords are bad at protecting the files that constitute your website, not to mention any sensitive data that might be stored in your databases.
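Some rough arithmetic shows why keys win. Assuming a password chosen uniformly at random (real passwords are far weaker than this), its entropy compares poorly with the secret in a modern SSH key:

```python
import math

def password_entropy_bits(alphabet_size, length):
    """Bits of entropy in a password chosen uniformly at random."""
    return length * math.log2(alphabet_size)

# an 8-character password over lowercase letters and digits: ~41 bits
weak = password_entropy_bits(36, 8)
# a 16-character password over all 95 printable ASCII characters: ~105 bits
strong = password_entropy_bits(95, 16)
# the secret in an Ed25519 SSH key: 256 bits, immune to online guessing
```

At 41 bits, a distributed brute force attack is entirely feasible; nobody will ever guess a 256-bit key.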
Most readers of this article will have set up SSL/TLS encryption for a website at some point in their career. It goes with the territory for system administrators and site owners. But for the average website owner, the process is fraught with difficulty and opportunities to make a mistake. Let’s Encrypt — which will become available to the public next month — is a new way of adding domain-validated SSL certificates to a site that aims to make it easy for everyone.
It’s reasonable to ask: does everyone really need SSL encryption? It’s obvious that eCommerce sites and sites that handle sensitive information need a way to protect data that travels between server and browser from snoopers. The case for the average blog is somewhat less clear, but with the advent of ISPs that choose to inject their own advertising into blogs, the proliferation of content management systems that require authentication to post, and the eagerness of certain organisations to track what people are reading on the web, there’s a strong argument that all sites should be protected.
Over the last couple of years, web design has homogenized around a small set of tropes — one-page sites, full-bleed images, parallax scrolling, and what’s come to be known as scrolljacking. I’ve loved many of the sites that adopted this set of design trends, but scrolljacking has never been on my list of things to admire about a site’s design.
I’m not going to single out any particular site, but I’m sure you’ve all experienced a site with scrolljacking. You click on a link and are taken to a site that looks beautiful. You scroll down to see more, only to find that your mouse or trackpad seems to be malfunctioning. The page jerks about or scrolls in slow motion. It doesn’t stop moving when you stop scrolling. Sometimes you scroll and everything disappears from the screen; you have to keep scrolling until content reappears. And sometimes, instead of scrolling top to bottom, the whole page lurches sideways!
All software has flaws. Sometimes those flaws lead to security vulnerabilities that put users at risk. Security researchers work to find those vulnerabilities. Responsible researchers report vulnerabilities to developers and give them time to release a fix — and if they don’t, the researcher will release their findings to the public. This is an established procedure. The question is: how should a developer react?
Companies who are told that their products are vulnerable sometimes do not respond well. In some ways, that’s understandable.
A vulnerability has been discovered in a cryptographic algorithm used by tens of thousands of web servers to create secure TLS connections with browsers. As I write, almost nine percent of the web servers in the Alexa Top 1 million sites are vulnerable, as are a huge number of mail servers.
The best way to mitigate the risk of attack is to ensure that you’re running the latest version of your browser. If you’re a server administrator, you should ensure that your server does not support export cipher suites and upgrade OpenSSL and other TLS libraries to their most recent version.
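For administrators working in Python rather than against OpenSSL's configuration directly, a server-side context showing both mitigations might look like the sketch below. The cipher string and version floor are illustrative; consult your TLS library's documentation for the equivalent settings.

```python
import ssl

# A server-side context; PROTOCOL_TLS_SERVER negotiates the best shared version
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Refuse anything older than TLS 1.2 (requires Python 3.7+ / OpenSSL 1.1.0+)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Make the exclusion of weak, export-grade suites explicit in the cipher string
ctx.set_ciphers("HIGH:!aNULL:!EXPORT:!LOW")
```

Modern OpenSSL builds no longer ship export ciphers at all, but stating the exclusion explicitly guards against permissive defaults on older systems.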
This January, security researchers at Check Point discovered a set of vulnerabilities in Magento that could potentially allow a malicious actor to execute arbitrary PHP code on eCommerce sites, allowing an attacker to create a new admin account or to steal sensitive information, along with any number of other actions harmful to both eCommerce retailers and their customers.
Check Point disclosed the vulnerability to the Magento team, who quickly issued a patch. The patch has been available for more than two months. Last week, in accordance with the doctrine of full disclosure, Check Point released comprehensive details of the vulnerability, explaining how it was discovered, the code flaws that made it possible, and how it can be exploited.
Never mind Ghost, WhatsApp’s broken privacy, or all the bugs surfacing in Windows 8. One of the most enduring security threats on the modern web is something known as CryptoPHP. This nasty little piece of work installs a backdoor onto content management systems by way of an infected theme or plugin; these addons are usually pirated copies of the real ones.
An attacker can then use a connected platform to gain administrative access to the compromised site. This allows them to do … pretty much anything, actually. Worse still, this is one of the most versatile security exploits we’ve seen in a while – it can self-update, makes use of strong encryption, has an application infrastructure that rivals some businesses, and includes a number of backup mechanisms that make it a distressingly insidious presence on the web.
A vulnerability that could potentially allow an attacker to execute SQL commands on WordPress sites has been discovered in the popular Yoast SEO plugin. An update to fix the exploit has been pushed to WordPress sites that have automatic updates turned on, but if you’re still using an older version of the plugin, you should update immediately. Versions older than 1.5 are not vulnerable, but they’re seriously out of date, and if you can update to the newest version, you should. Oddly, this plugin uses different version numbers for its free and premium offerings; we’re using the free plugin version numbers in this post. If you’re a premium user, take a look at Yoast’s post on the topic.
Yoast’s SEO plugin is one of the most popular plugins in the WordPress repository and is installed on many millions of WordPress sites.
Mark Nottingham, the chair of the HTTP working group overseeing the development of HTTP/2, has announced that the standard is complete and headed to the RFC editor for tweaks before publication as a standard.
HTTP is one of the fundamental technologies underlying the web, and it hasn’t seen a comprehensive upgrade since 1999, when HTTP 1.1 was introduced.
The web has changed enormously since 1999, but the technology that makes the web possible changes achingly slowly. While the tools used by developers and designers for creating sites are constantly moving forward, they have still had to contend with a protocol that was created back when dial-up was all the rage and blinking text was considered a bold design choice.
Branded links are cool. It’s great to be able to get rid of ugly long links replete with strings of tracking and affiliate codes and replace them with short links that contain the name of your company or site. It looks good on social media, and it helps present a coherent brand image to web users. But, however cool they are, short links suck, and they suck because they break online transparency and user experience in some key ways.
Firstly, they’re mostly unnecessary. Back in the day, if you wanted to share a link on Twitter, the link counted towards your character limit. If your link was long, the rest of the text in your tweet had to be very short. That’s no longer the case. Twitter uses its own link shortening technology, and however long the link you paste into Twitter, it’ll only take up 22 characters of your Tweet (it’ll also take up 22 characters if the original URL was shorter than that — all links on Twitter are “shortened”). One of the benefits of relying on Twitter’s own shortener is that it displays the beginning of the original link, rather than a shortened version.
In general, the ability to track users with cookies has been a good thing for the web. Tracking within sites allows us to maintain state, tying together a user’s page loads and data into a coherent session; without it, eCommerce as we know it, and most other web services, would be impossible. Tracking across sites powers the targeted advertising that drives much of the online economy.
Tracking with cookies, however distasteful it may seem to some privacy advocates, gives the user a large measure of control. They can delete cookies, choose not to be tracked (with varying levels of compliance from sites and browsers), and they can use an Incognito or Privacy mode that cuts sites off from cookies altogether.
Hey there, folks. Today’s piece is going to be a sort of primer on SSL Certificates. See, most of you probably already understand how important it is that you encrypt your communications. What a lot of you may not know is what actually goes into selecting the right certificate for your business.
That’s where we come in. We’re going to go over all the stuff you need to take into account when you’re choosing an SSL Certificate for your site. Best be sure you don’t ignore them – if you simply blunder out and buy the first certificate you come across, you’ll regret it.
It’s uncontroversial that sites handling sensitive data like credit card numbers should implement HTTPS to protect that data from snoopers. It’s also best practice to encrypt connections for sites that allow users to log in — not only is their data protected as it travels from the site to the browser and back again, but so is the authentication cookie that maintains their session.
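For example, the authentication cookie itself should be flagged so that it is only ever transmitted over the encrypted connection and is hidden from scripts running in the page. A quick sketch with Python's standard library (the cookie name and value are hypothetical):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-session-token"
cookie["session"]["secure"] = True     # only ever sent over HTTPS
cookie["session"]["httponly"] = True   # hidden from in-page JavaScript
cookie["session"]["path"] = "/"

# The header a server would emit alongside a successful login response
header = cookie.output(header="Set-Cookie:")
```

Without the Secure flag, a session cookie set over HTTPS can still leak if the browser is ever induced to make a plain HTTP request to the same site.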
But it’s becoming increasingly common for security experts and online service providers to recommend that all sites be encrypted with SSL. Google gives secure sites a bump in the SERPs and its Chrome browser may soon give users a visual warning if sites aren’t encrypted. That doesn’t just apply to the classes of sites for which encrypted content is now the norm, but to read-only sites with no sensitive user data and no logged-in users.
Virtual private servers are one of the most flexible and cost effective ways of acquiring a powerful server without breaking the bank. They’re much more powerful and versatile than shared hosting, and they’re much less expensive than dedicated servers.
You might think I’m biased (and I am), but I believe that web hosting is something that almost everyone can find a use for, and that virtual private servers are the best option for most. An always-connected, remote Linux server environment is an incredibly flexible tool that can serve many different purposes. In this article, I’d like to take a look at three ways a virtual private server could be useful to you this year.
You may have recently seen a story in which it was reported that airline wireless internet provider Gogo was issuing SSL certificates for domains owned by Google. There was a small storm of controversy around the story, because, in theory, issuing such certificates could allow the bandwidth provider to see content flowing over its network that the user assumed to be encrypted.
In the grand scheme of things, the world wide web is a young technology — less than a quarter of a century old — but it’s had an enormous impact on the way we live our lives and communicate with each other. Yet very few of us really understand the technology that forms the world’s dominant platform for cultural expression, business, and communication. Most are vaguely aware of what HTML and CSS are, but very few would be able to knock together even the simplest web page. Even fewer understand what it takes to manage the Linux servers on which the Internet largely runs. I think that’s a shame for three reasons.
Firstly, although I’m not of the view that everyone needs to be a developer, I do think some practical knowledge of coding and web development is useful in any number of ways for almost everyone. Our world is built on software and our communities are built on the web. Building a web site is an excellent way to develop skills that turn internet users into full online citizens who can contribute. Without the ability to understand what lies beneath the surface of the sites they interact with, web users have limited control over their online experience.
If you’re looking to set your business up with a dedicated or virtual private server, you’ve a very important decision to make, perhaps even more important than which hosting company you choose. It involves how much money you’re willing to spend in the interest of convenience. More importantly, it involves how much work you intend to put into your server – and how much control you’ll have over its operation, besides.
I am speaking, of course, about choosing between managed and unmanaged hosting – literally, choosing between having a host run a server for you and running that same server yourself. It’s important that you understand the differences – and strengths – of each approach as well as your business’s resources and needs. Otherwise, you might end up making the wrong choice.
Tracking is the holy grail of the online advertising industry. Randomly throwing advertising at users has a very low success rate. The better advertisers can predict what a user will be interested in, the more likely they are to serve advertising that gets more clicks that convert to more sales. To target advertising, networks need to develop profiles of users, and the most common way to do that is with cookies. A cookie is placed in the user’s browser containing a unique identifying number, and whenever a browser visits a site that belongs to the advertising network, code on the page looks at the cookie. In this way, advertising networks can track users across the web — and if those users are logged in to a service like Google or Facebook, the tracking can be all the more accurate, because they can associate it with much richer data.
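The mechanics can be simulated in a few lines of Python. The dictionary passed in stands in for a browser's cookie jar, and `profiles` for the network's server-side store; all names here are hypothetical.

```python
import uuid

profiles = {}   # the ad network's server-side store: uid -> sites visited

def record_visit(browser_cookies, site):
    """Runs whenever a page embedding the network's code is loaded."""
    uid = browser_cookies.get("net_uid")
    if uid is None:
        uid = uuid.uuid4().hex            # first sighting: mint an identifier
        browser_cookies["net_uid"] = uid  # sent back to the browser via Set-Cookie
    profiles.setdefault(uid, []).append(site)
    return uid
```

Because the same cookie comes back from every site that embeds the network's code, the network accumulates a cross-site browsing history keyed to that one identifier.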
In what is becoming a worryingly frequent occurrence, a vulnerability has been reported in the SSL protocol used to encrypt connections between web clients and servers. The good news in this case is that the vulnerability occurs in a relatively ancient version of SSL (so old that it was still called SSL, and not the more modern TLS). The bad news is that the way SSL is implemented on modern browsers and other clients means that the ancient protocol is sometimes still used.
Cutely named Poodle (Padding Oracle On Downgraded Legacy Encryption) and officially named CVE-2014-3566, the vulnerability has the potential to allow an attacker to read plaintext versions of data that should be encrypted.
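Server administrators can mitigate Poodle by disabling SSLv3 outright so that no downgrade is possible. In Python's ssl module, for instance, the intent can be made explicit, though recent releases already disable SSLv3 by default:

```python
import ssl

# create_default_context() already disables SSLv2 and SSLv3 in recent
# Python releases; setting the flag explicitly documents the intent and
# protects against permissive defaults in older library versions
ctx = ssl.create_default_context()
ctx.options |= ssl.OP_NO_SSLv3
```

Equivalent directives exist for web servers (for example, removing SSLv3 from the protocol list in Apache or nginx configuration).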
Tracking is the holy grail of online marketers and businesses that rely on accurate information about their users. The motivation for tracking is hardly ever as suspicious as some privacy advocates would have us believe. Companies use the information to provide better services. Nevertheless, users should be able to decide for themselves whether to allow their online activity to be tracked. That many decide to install tracking blockers and deny the use of third-party cookies is evidence that there’s a proportion of Internet users that dislike the idea of being tracked.
ETags are a method used by some site owners to circumvent user choice where tracking is concerned, and they are an interesting illustration of how tracking works.
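A toy simulation makes the trick concrete: the server hands out a unique ETag, and the browser's cache dutifully replays it as an If-None-Match header on the next visit, even after cookies are wiped. All names here are hypothetical.

```python
import uuid

seen = {}   # etag -> visit count: the server's covert "profile" store

def handle_request(if_none_match=None):
    """A server abusing the ETag caching header as a tracking identifier."""
    if if_none_match in seen:
        seen[if_none_match] += 1
        return 304, if_none_match   # Not Modified: the user is re-identified
    etag = uuid.uuid4().hex         # new visitor: mint a unique "validator"
    seen[etag] = 1
    return 200, etag

# First visit: the browser caches the response and its ETag...
status, etag = handle_request()
# ...and replays it on the next visit, even with cookies deleted
status2, etag2 = handle_request(if_none_match=etag)
```

Because the identifier lives in the browser's HTTP cache rather than its cookie jar, clearing cookies alone doesn't remove it; only clearing the cache does.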
In the wake of the critical Heartbleed vulnerability, many security experts are advising that site owners regenerate the keys used for their site’s SSL certificate, create a new certificate using the fresh keys, and revoke their old certificate. Of course, that’s exactly what site owners should do, but there’s a quirk in the way modern browsers handle SSL certs that could potentially reduce the effectiveness of certificate revocations.
An SSL certificate, in simplified terms, ties together a public key pair and a domain name. The private component of the key pair is a secret known only to the site. If that private key becomes known to a third party, then it’s no longer possible to guarantee that data encrypted using that key or a site verifying its identity with that key is really what it appears to be.
The vast majority of people value convenience over security. Properly securing a business’ online presence and network infrastructure takes constant effort. With all the good intentions in the world, if security gets in the way of productivity and efficiency, best practices will fall by the wayside.
The best way to maintain a secure business is to educate employees about the risks and train them in simple procedures they can implement in their day-to-day activities to mitigate those risks. It’s not a watertight system, but maintaining a level of awareness about potential vulnerabilities and the possible repercussions helps to keep people on their toes.
In this article I want to take a quick look at four strategies and vulnerabilities that hackers and criminals use to infiltrate private business networks.
The online economy will be worth $4.2 trillion by 2016. That’s an awful lot of money sloshing around the world over wires and through the air. The criminal market for user credentials and credit card numbers is enormous. For the online economy to work, online retailers and site owners need a way to protect data from prying eyes and verify their identity. SSL certificates are by far the most popular way of achieving both.
Imagine you are a shopper browsing an eCommerce website. You want to make a purchase, but before you do, you need to know that the store you are browsing is who it says it is — otherwise you might be sending credit card details to a criminal. You also need to be sure that no one else is looking at this data as it goes over the Internet.
Attackers exploiting a weakness in the Network Time Protocol launched a DDoS attack the volume of which outstripped last year’s huge attack against SpamHaus.
It’s becoming a depressingly familiar story for web hosting providers. Last year we were all surprised by the size and ferocity of distributed denial of service attacks, which had grown to unprecedented volumes. It looks like the trend will continue through 2014, with the year’s first really large attack being revealed by the CloudFlare DDoS mitigation service. Last year’s SpamHaus attack, which used a DNS amplification method, peaked at data volumes of 300 Gbps. Last week’s attack hit highs of 400 Gbps.
Hackers are a very motivated set of people. Whether they’re criminals or security researchers, they love nothing more than finding flaws in your online service and letting the world know about it, especially if you don’t respond in what they consider to be a timely fashion to any disclosure they make.
In a recent and well publicized breach of security, 4.6 million SnapChat users’ phone numbers and usernames were published on the web. SnapChat is a photo sharing service with a spin, allowing shared images to be viewed for a limited amount of time before they are deleted. SnapChat has seen astonishing growth over the last year, and the company’s confidence is high — they turned down a $3 billion purchase offer from Facebook a few weeks ago.
Unfortunately, they didn’t handle the security breach well from a public relations perspective. Firstly, they were warned in advance of the vulnerability in their API, and while they acknowledged the possibility of an exploit, they didn’t do much to fix it. When the vulnerability was duly exploited, they once again released a statement acknowledging it and attempted to reassure users that they were working on fixing the problem.