The Internet of Things is pretty incredible. We’ve got refrigerators that automatically order more produce when they run out. Voice-activated, color-changing lighting. Thermostats that let us adjust our home’s temperature with a few taps on our phone.
The web and server hosting world is full of abbreviations that look as though they were designed to confuse inexperienced hosting clients: IaaS, PaaS, SSD, SSL, VPN, VPS, and many more. It’s especially confusing when abbreviations are similar, but mean completely different things, as is the case with VPN and VPS. I’ve often heard hosting clients say VPN when they mean VPS, and vice versa.
In spite of the ubiquity of social media, email marketing is still one of the most effective ways for new and growing online businesses to get the word out. Almost everyone has an email account. Although they may spend more time on social media than in their email inboxes, consumers would rather receive permission-based promotional content via email than phone calls or social media messages.
As a sysadmin, system monitoring is one of your most important tasks. It’s crucial in ensuring acceptable levels of performance and uptime. To keep your mission-critical software and services up and running, you need to keep an ear to the ground.
It only becomes more essential as your infrastructure increases in complexity and scale. Without both aggregate data and real-time status information, you’re functionally flying blind. If it helps, think of systems administration as juggling.
A year ago on Tuesday, we were shocked and saddened to lose our friend and CTO, Nick Zyren. It was an extraordinarily difficult time for all of us here at Future Hosting.
On a professional level, many of Future Hosting’s values, goals, and dreams were inspired by him. His passion was ensuring our clients and team members are happy and we see that same passion reflected in our team now.
On December 9, we said goodbye to Future Hosting’s CTO, Nick Zyren. On this day, his birthday, we just wanted to take a moment to remember Nick, who left us way too early.
Here’s to you Nick.
If ISPs are allowed to charge content providers for access to their customers, the Internet as we know it may be at risk. The Internet has flourished as an open and neutral environment where everyone has equal access and no one has special privileges.
Imagine if it were possible for wealthy individuals to pay for privileged access to highways. Those with the money could have lanes closed to other traffic so that they could get where they wanted to be without dealing with the gridlock the ordinary driver suffers through. That would be good for the rich and for the people who make money selling access, but terrible for everyone else: people couldn’t get to work, goods wouldn’t be delivered on time, and economic efficiency would suffer.
GitHub is a treasure trove of useful code snippets, scripts, sites, and applications for anyone who builds websites. Even if you’re not much of a developer, if you run a website you’re certain to find something on GitHub that will add a nifty feature or make some aspect of site management easier.
In this article, I’d like to highlight four projects that caught my attention on GitHub this month and that will be of particular interest to web designers, developers, and anyone running a Linux server.
On December 9, Future Hosting CTO Nick Zyren unexpectedly passed away at the young age of 31. As such, we’ve elected to break from our typical blogging routine to honor Nick’s memory.
Since joining Future Hosting in 2009, Nick played a significant role in building Future Hosting from a young start-up into a well-established, internationally diversified brand. For anyone who met Nick, it was readily apparent that he cared for everyone with whom he interacted, be they clients frustrated enough to seek help or colleagues working toward common goals. Nick will be missed.
The way we use the web has changed out of all recognition since its inception two decades ago. In the mid-90s, a webpage was a simple collection of HTML and static assets, almost all of which could be relied upon to reside on the same server. Loading a webpage was a simple matter of establishing an HTTP connection with that server, which would send the necessary files. A modern web application functions quite differently, with dozens of assets spread across multiple servers to be delivered at different times. HTTP 1.1, which was designed for bulk data transfers, does not perform well when it is tasked with the transfer of multiple small files from different locations.
In our Black Friday promotion, all new Virtual Private Server accounts are eligible for a 25% discount with the promotional code: BF25.
As the holiday season approaches, it may seem like 2013 is winding down and it’s time to leave new projects until the New Year, but, in fact, December is the perfect time to begin laying the foundations of next year’s projects.
That’s why we’re excited to be offering new customers a 25% discount on all of our Virtual Private Server plans. A VPS is the perfect platform for developing, testing, and eventually hosting a web site or application. If you’re going to need extraordinary hosting for years to come, with unparalleled support and unbeatable reliability, check out our virtual private server hosting plans, but before you do, here’s why we think you’ll love our VPS plans.
User authentication presents a number of problems for web developers. As the web has become richer, moving from static sites to interactive services, the need to identify users has become pressing. In theory, the problem is not a difficult one to solve: the user presents an identifying token (a username, for example) and a shared secret such as a password, which are matched against entries in a user database. Unfortunately, in practice, there’s a lot that can go wrong, from insecure transmission of tokens to the database breaches we hear about all too often. In fact, many security experts advise sites against implementing their own authentication procedures if it can be avoided. There is too much at stake, and the chances of making a mistake are too high to risk it.
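To make the check-against-a-database idea concrete, here is a minimal Python sketch of the one part every such system needs: storing a salted, slow hash rather than the password itself, and comparing in constant time. The function names are illustrative, and in line with the advice above, a vetted library (bcrypt, Argon2) is preferable to rolling your own in production.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash for storage; never store the plaintext."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash from the supplied password and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The salt ensures identical passwords hash differently for different users, and `hmac.compare_digest` avoids leaking information through comparison timing.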
Single sign-on services offer an alternative to self-designed log-in systems. Anyone with a social media account is familiar with how single sign-on services work. In that case, the social media platform acts as the identity provider, verifying the identity of signed-on users for the service provider. Single sign-on provides a number of benefits, but it isn’t an unproblematic authentication mechanism.
The Benefits Of Single Sign-On
The obvious benefit to developers of using a single sign-on service is that they merely have to implement code to link their service to the authentication provider’s service (Facebook Connect, for example). That process is much less complex and time-intensive than building an authentication system from scratch. It’s also much less likely to result in a flawed authentication system: Facebook and the other SSO providers are likely to have significantly more resources to invest in getting it right than the average web service startup. An additional advantage is that web services don’t have to provide their own support for lost or forgotten usernames and passwords.
While a managed server running Linux is very secure compared to the alternatives, no operating system is invulnerable.
When it comes to choosing an operating system to use on a server, security is of paramount importance. General scuttlebutt has it that Linux is vastly more secure than alternatives — alternatives that aren’t based on Unix, at least. To a degree, this is true, but it’s far from the case that Linux is completely invulnerable. To think otherwise is to risk taking a complacent attitude to server security, which can lead to server owners being taken unawares when they end up being hacked.
We’re going to cut through the misleading flimflam promulgated by fanboys and the ill-informed so that you can make informed decisions about the security issues that come with running a Linux server.
The shift in computing habits from desktop to mobile over the last few years has been nothing short of staggering. It may be hyperbole to talk about the post-PC world, but we’re rapidly approaching the time when the majority of Internet use occurs on mobile devices. Some countries are already there, particularly in the developing world, where infrastructure deployment for wireless access outstrips traditional wired connections.
Earlier this year, it was reported that over 40 percent of Internet time in the US was spent on mobile devices. The growth of mobile devices as a primary point of access to the Internet presents numerous business opportunities for web developers and designers, as well as mobile app developers. However, at current rates of growth, demand is going to outstrip available bandwidth. The technology currently in use and the spectrum available for wireless communications are limited. Anyone who has attended a conference where large numbers of people are trying to get online, either through the venue’s Wi-Fi or over 3G and 4G connections, can attest that bandwidth is a finite resource.
Linux has a well deserved reputation for being an extremely secure operating system when compared to the alternatives. Linux and other operating systems based on Unix have a permissions model that makes it very difficult for malware to carry out the sort of changes to the system that would be useful to malware creators, so, for the most part, users of Linux servers don’t have to worry about “getting a virus”.
However, it’s not true that no malware exists for Linux, nor is it the case that Linux and the software it runs is totally impregnable to hackers. Malware that attacks the operating system itself is mostly “proof-of-concept” software that has a very low success rate in the wild, but, as with all software, vulnerabilities are occasionally discovered in the kernel or in the software that Linux servers run: the Apache web server, the PHP scripting language, or the SSH server, for example. We’re going to have a look at why you might want to install Linux malware scanners on servers, and run through some of the available options.
Getting to grips with the basics of Linux server administration isn’t too difficult. The fundamental command set and its options can be picked up fairly quickly. But if you just skim the surface and learn only what’s strictly necessary, you’re missing out on a lot of the power that the Linux command line has to increase productivity.
Linux, as a Unix-based operating system, has a venerable history stretching back more than four decades, and as an OS favored by geeks and hackers it has accumulated a huge number of optimizations and tools that make life on the command line more efficient. Among the time-saving command line tricks we’re going to look at today, you’re bound to find a few that will have you facepalming when you think about how many times you’ve gone the long way round to achieve something that can be done much more quickly.
Make The Most Of Your Command History
KSplice Uptrack enables the application of security updates to a running Linux kernel without server downtime.
Back in the day, epic uptime was an indicator of a successful sysadmin. What could speak to a server’s smooth running and competent administration better than an uptime count that ran to years?
However, as impressive a metric as that might have been, there’s some disagreement as to whether it is an indicator of success: it’s usually enough to avoid actively screwing things up or having a hardware failure to keep a Linux server going forever. More importantly, a huge uptime for a Linux server is almost always a big red flag that reads “Unpatched Kernel”.
KSplice, originally developed by Jeff Arnold and based on his master’s thesis, is a tool that enables an active Linux kernel to be updated without the need for a reboot.
This post has been contributed by Evan Daniels of DNS Made Easy. Evan is the technical writer for DNS Made Easy, a leading provider of DNS hosting. Follow DNS Made Easy on Twitter at @dnsmadeeasy, Like them on Facebook http://www.facebook.com/dnsmadeeasy, and check out all the services they offer on http://www.dnsmadeeasy.com.
Configuring DNS servers is not an easy task. There’s a degree of complexity involved that often isn’t appreciated. Unfortunately, that means there are large numbers of misconfigured DNS servers on the net, particularly servers that are set up to act as open recursive resolvers.
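As one common example of the kind of fix involved, an open recursive resolver running BIND can be closed off by restricting recursion to trusted networks in named.conf; the address range below is a placeholder for your own network:

```
options {
    // Answer recursive queries only for the listed networks,
    // not for the whole Internet.
    recursion yes;
    allow-recursion { 127.0.0.1; 192.0.2.0/24; };
};
```

A resolver locked down this way still serves its own users while refusing to be used as an amplifier for traffic aimed at strangers.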
The days of data silos are done, or at least they should be. Data is most valuable when it’s connected and available to be combined in ways unforeseen by its original collectors. In recent years, we’ve seen a huge number of new APIs created to provide data and services that web developers can integrate into their own sites, often for free.
In this article we’re going to take a look at seven Application Programming Interfaces that every web developer should be aware of.
We’re sure you know about this one, so we’re putting it in first as a taster. If you remember the bad old days when websites were limited to a small range of “web safe” fonts, the current proliferation of beautiful, elegant, typographically awesome sites must be very welcome.
Google Fonts provides free access to hundreds of fonts that can be included on any website. The API is simple to use, and Google provides an interface which will let users browse the fonts before spitting out the necessary code for them to copy and paste.
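Because the classic Google Fonts CSS API is driven entirely by a URL query string, the embed code is easy to generate programmatically too; here’s a small Python sketch (the font choices and helper name are just examples):

```python
from urllib.parse import urlencode

def google_fonts_link(*families):
    """Build a <link> tag for the classic Google Fonts CSS API.

    Multiple families are joined with '|' into a single 'family' parameter.
    """
    query = urlencode({"family": "|".join(families)})
    return '<link rel="stylesheet" href="https://fonts.googleapis.com/css?{}">'.format(query)

print(google_fonts_link("Open Sans", "Roboto"))
```

Dropping the resulting tag into a page’s head loads the stylesheet, after which the fonts can be referenced in CSS as usual.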
The web is full of unstructured data that’s difficult to wrangle into a form that’s usable by web sites and apps. The Yahoo! Content Analysis API provides a free service for detecting entities, categories, and relationships within data and ranking them by relevance.
Although not free, the Google Translate API is a great service for dynamically translating between thousands of different pairs of languages, as well as identifying the language of submitted text.
We’ve given Google enough love, so for geographic data and mapping we’re going to point you in the direction of the OpenStreetMap API instead of Google Maps. OpenStreetMap has several APIs, including one for fetching and saving raw geodata, but web developers are more likely to find the OpenLayers API useful. It provides facilities for embedding maps in web pages.
Panoramio is a service that geolocates photos, allowing users to display images in their geographical context.
Decibel Open API
The Decibel Open API is an interesting new approach to content management system design that allows developers to expand on the Decibel CMS platform by creating apps and code that access the Decibel framework. It’s a sort of hybrid cloud approach to content management design as an alternative to the usual SaaS offerings with their inherent limitations.
The Bitly API provides programmatic access to many of the features of the company’s URL shortening and analytics service.
Which APIs are you using in your web development? We’d love to hear what you guys are using to create your sites and apps, so feel free to share in the comments below.
A recent change to the default configuration of the WordPress content management system leaves thousands of websites open to abuse by malicious parties, who can use them to instigate a distributed denial-of-service attack against innocent sites.
Online security company Incapsula has released details of how it mitigated a DDoS attack against its clients that used the pingback mechanism of WordPress sites to flood target sites with thousands of requests, potentially overwhelming their ability to meet legitimate requests.
According to the report, the attack involved at least 2,500 sites, including sites like trendmicro.com and zendesk.com. Unlike the recent DDoS attacks that used open recursive DNS servers to amplify the amount of data involved, this exploit isn’t capable of significantly amplifying data, but because WordPress is almost ubiquitous and almost all WordPress sites are vulnerable, it wouldn’t take much effort to recruit large numbers of sites for any attack.
Pingbacks are intended to be a mechanism for automatically notifying site owners when another site links to them. When the originating site creates a link, WordPress sends an XML-RPC request to the linked site, which checks that a link exists and records the pingback.
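Under the hood, a pingback is an ordinary XML-RPC call named `pingback.ping` with the source and target URLs as parameters. Python’s standard library can show what the request body a linking site POSTs to the linked site’s XML-RPC endpoint looks like (the URLs here are placeholders):

```python
import xmlrpc.client

# Build the XML-RPC payload that a linking site would POST to the
# linked site's XML-RPC endpoint to announce the link.
source = "http://linking-site.example/post-with-link"
target = "http://linked-site.example/original-post"
payload = xmlrpc.client.dumps((source, target), methodname="pingback.ping")
print(payload)
```

The abuse potential follows from the verification step: because the receiving WordPress site fetches the source URL to confirm the link exists, an attacker who sends such requests to thousands of blogs with a victim’s address in the URL turns every verification fetch into attack traffic.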
This post has been contributed by Graeme Caldwell — Graeme works as an inbound marketer for InterWorx, a revolutionary web hosting control panel for hosts who need scalability and reliability.
Scalability and reliability are sometimes thought of as separate aspects of infrastructure and design. However, it is possible to consider them as complementary axes, both contributing to the overall goal of building a robust platform.
One of the key factors in maintaining reliability is having redundant capacity. A server cluster with limited redundancy is always at risk of either having demand exceed capacity, and thereby slowing down the whole system, or of having one node in the network fail, and so throwing extra demand onto nodes that are already stretched to their maximum, also resulting in slowdowns and poor performance.
A scalable server cluster is one that can be easily expanded to accommodate additional load as a business grows. Designing for scalability by using multiple inexpensive and less powerful nodes has considerable advantage over the alternative strategy of having few very powerful nodes. Implementing a server cluster behind load balancers abstracts provisioning of capacity from the point of delivery, which means that additional capacity can easily be added on the back end. Additional database servers can be added, caches can be interposed between nodes and the load balancers to improve performance, and extra web servers and file servers can be added as needed. As has been said, architecture is properly designed when it can be scaled by simply adding more of the same stuff ad infinitum.
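The load-balancing abstraction described above can be reduced to a few lines. Real balancers such as HAProxy or nginx add health checks, weighting, and connection tracking, but the core round-robin idea, and why “adding more of the same stuff” works, is simply this sketch (class and node names are illustrative):

```python
class RoundRobinBalancer:
    """Hand out backend nodes in turn.

    Because the pool is hidden behind the balancer, scaling out is
    just appending another identical node; clients never notice.
    """

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._next = 0

    def add_node(self, node):
        self.nodes.append(node)

    def next_node(self):
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return node

pool = RoundRobinBalancer(["web1", "web2", "web3"])
print([pool.next_node() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
```

Each request is routed to the next node in the pool, so load spreads evenly and new capacity takes effect as soon as a node is added.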
Everyone is familiar with that mounting sense of frustration as they watch mysterious URLs spin by in the browser status bar while waiting for a slow-loading site. In fact, research shows that many site visitors won’t bother to wait if a page takes longer than 3 seconds to load.
According to a recent report in ComputerWorld, the Bring Your Own Device trend, which is seeing increasing popularity in the corporate world, is failing to make inroads with federal CIOs, particularly those in the defense sector.
BYOD, driven by the growing proliferation of mobile devices and software as a service, has seen gradual acceptance by CIOs. That acceptance has ranged from enthusiastic to cautious in the extreme, but many recognize the benefits.
BYOD can result in significant cost savings. Infrastructure procurement costs are reduced, as it becomes unnecessary to provide laptops or mobile devices to employees. Fewer devices to manage also leads to simplified infrastructure, and lower costs of device management, support, and troubleshooting.
There are also advantages to be gained when developing applications suitable for BYOD. Generally, applications designed for BYOD can take full advantage of cloud and virtualization technologies, making a virtue out of a necessity, and creating web application interfaces that run on managed hosting, rather than desktop applications that can be more complex to administer, and keep up-to-date and secure.
Depending on the industry in question, BYOD can help raise levels of employee satisfaction and retention, as well as reduce training costs. People tend to have strong preferences for particular platforms and devices; forcing them to use legacy, low-powered machines with an unfavored operating system when they have a MacBook Pro they’d rather be using can be a considerable source of dissatisfaction.
In spite of these benefits, there are also significant downsides to BYOD; the most pointed, for federal agencies, are the security issues that arise when end-user hardware isn’t controlled centrally.
If a mobile device that is wholly owned by a federal agency is lost, the data can be remotely wiped, but on devices where personal data is mixed with agency data, wiping is not so clear-cut.
“There’s the issues of what if I wipe your device and you lost all the pictures of little Susie and little Johnny and they weren’t backed up? We’re going to have to have some policies that go into place with this and figure that piece out. Having full MDM (mobile device management) capability across the device is absolutely key.” says Coast Guard CIO Rear Adm. Robert E. Day Jr.
Allowing access to unsecured mobile devices and laptops also opens up the possibility of exploitation by hackers. In a recent security incident, Facebook staffers had their personal laptops hacked. Hackers gained access through a Java exploit that had been placed on a hacked site.
For federal CIOs, who regard data security as their primary responsibility, the possible benefits of BYOD are insignificant when compared to the potential security issues.
For many, mobile devices are becoming the primary means of accessing the Internet, and even for those who prefer to access sites through desktop browsers, smartphone and tablet use is growing. While the speed of mobile connections has vastly improved with the introduction of 3G and 4G networks, carriers are still imposing relatively meager data limits on their users. Keeping data use to a minimum without compromising user experience is essential to developers who intend to take advantage of mobile.
Compression technology is a key factor in providing a data-efficient, low-latency experience to mobile users. In an effort to make better compression technology widely available, Google has released the code for an improved compression library under the Apache 2.0 license. Zopfli, developed by Googler Lode Vandevenne, uses the same Deflate compression algorithm as the popular zlib library, but promises output files that are 3% to 8% smaller.
Lode described Zopfli as:
“an implementation of the Deflate compression algorithm that creates a smaller output size compared to previous techniques. The smaller compressed size allows for better space utilization, faster data transmission, and lower web page load latencies. Furthermore, the smaller compressed size has additional benefits in mobile use, such as lower data transfer fees and reduced battery use.”
Zopfli is best suited to compressing static content that can be compressed once and served multiple times: although it produces smaller output files, it does so at the price of considerably increased processing time (up to 100x that of zlib), making on-the-fly compression of dynamic content unfeasible.
However, where processing is cheap and data is expensive, the additional overhead could be worth the cost. Although Zopfli incurs a penalty during the compression phase, decompression is no slower than files compressed with other implementations of the Deflate algorithm.
Because Zopfli is based on the Deflate algorithm, it is bit-stream compatible with the decompression libraries already built into browsers, including zlib and gzip, so it could be deployed on a site immediately without backward-compatibility concerns.
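Zopfli itself is a separate C library, but the size-versus-CPU tradeoff it pushes further is already visible in Python’s built-in zlib: a higher compression level spends more time to produce a smaller output that any Deflate decompressor reads identically, which is exactly why a drop-in encoder like Zopfli needs no client-side changes.

```python
import zlib

# Deliberately repetitive sample data so the level difference is visible.
data = b"The quick brown fox jumps over the lazy dog. " * 2000

fast = zlib.compress(data, level=1)   # cheap to produce, larger output
best = zlib.compress(data, level=9)   # slower to produce, smaller output

print(len(data), len(fast), len(best))

# Both streams decompress with the same routine; only the encoder differed.
assert zlib.decompress(fast) == zlib.decompress(best) == data
```

Zopfli pays a far larger encoding cost than level 9 for a few percent more savings, which is why it makes sense for compress-once, serve-many static assets.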
In related news, Google appears to be working on a compressing proxy for Chrome similar to Opera’s Turbo functionality or Amazon’s Silk browser. The proxy server will apparently make use of Google’s experimental SPDY protocol, which is intended to reduce latency.
As mobile data prices continue to be a barrier to providing the sort of rich experiences that mobile users demand and mobile devices are capable of delivering, we can expect to see further investment in compression technologies in the future.
Many small and medium businesses don’t have the resources to pay for system administrators to keep an eye on their servers round the clock. Nevertheless, servers do need to be up 24 hours a day, 7 days a week. A good hosting provider will take care of the fundamentals, but for the tweaks and modifications that keep a business’s infrastructure in top form, sysadmins, whether they are dedicated staff or a startup’s founders, need round-the-clock access.
It’s no fun to lug a laptop around or be tied to a desk, so it’s fortunate that there is a large number of mobile applications that will let the roving admin monitor and manage their servers while on the move.
We’re going to take a look at five of the best apps for managing Linux servers from your Android phone. iOS and Windows Mobile users shouldn’t feel neglected; we’ll be bringing you a future article with the best sysadmin tools for your favorite platform soon.
An SSH client is usually top of the list of tools that any sysadmin will need in their kit. Connectbot is one of the best. It can handle multiple SSH connections and create secure tunnels. For a quick reboot or a bit of config file hacking, Connectbot is the perfect choice.
Many of you will already be familiar with the Pingdom suite of monitoring tools. If you’re not, Pingdom is a service that runs periodic checks on a server to make sure that all is well, and includes a number of tools to help diagnose problems. The Android app features push notifications and access to server health information and response time stats.
For the Android app to be useful, you’ll need to set up a Pingdom account (not free).
Cura is a set of server admin tools. It includes a terminal emulator, syslog module for reading server logs, a monitoring module for displaying hardware status, and access to Nmap. It works via SSH so there’s no need to install additional software on the server.
Cura also has some handy security features. Phones have a tendency to get lost, and the last thing a sysadmin wants is a stranger having access to their servers. Cura allows users to remotely wipe its database with a predetermined SMS.
If you are planning on hacking config files and scripts from your mobile device, you certainly don’t want to be doing it from the default keyboard, which lacks control and tab keys, among others. The Hacker’s Keyboard restores these necessities and puts the punctuation keys back where you expect them to be.
When all else fails, or it’s the weekend and the sun is already well over the yardarm, it’s time to deploy that other sysadmin standby, the excuse. SysBull very handily generates excuses as to why things aren’t working and why they are likely to stay that way for a while.
Those are our five favorites, but we’d love to hear what you guys are using. Feel free to share your tools of choice in the comments.
Email forwarding is a popular feature among users who prefer a single email account for receiving mail sent to the various addresses they own. However, this convenience can harm your email server’s reputation. When forwarded messages are rejected by the remote mail server (for various reasons), they pile up in the queue and increase the load your server has to handle, and the rejections may also get your mail server blacklisted.
Another potential problem arises when the remote mail server uses SPF to validate the sender. Email forwarding effectively impersonates the sender: the forwarding mail server passes the message on while preserving the original sender address. Remote mail servers that check SPF will therefore refuse the email, because the forwarding server cannot be validated against the original domain’s published sender details.
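For reference, an SPF policy is published as a DNS TXT record on the sender’s domain; a typical record looks like the line below (addresses and names are placeholders). A forwarding server whose IP isn’t covered by the record fails the check, and the trailing `-all` tells recipients to reject such mail.

```
example.com.  IN TXT  "v=spf1 ip4:192.0.2.10 include:_spf.example.net -all"
```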
* Forwarding catch-all emails to an external mail server should be avoided. The best solution is to set the default address to :fail: so that messages are rejected and bounced back.
* Instead of using an email forwarder, create a real email account and use the POP feature in your main email account (Gmail, for example) to retrieve those emails. Email forwarding within the same domain/server is fine.
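On cPanel/Exim servers, the catch-all behavior mentioned above lives in the domain’s aliases file; the recommended :fail: setting looks like the line below (the file path, domain, and message text are illustrative):

```
# /etc/valiases/example.com
*: :fail: No such address at this domain
```

With :fail: in place, mail to nonexistent addresses is rejected during the SMTP conversation rather than accepted and forwarded, so it never burdens your queue or your reputation.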