IT has had its ups and downs over the past couple of years, as economic conditions have required companies not only to cut costs but sometimes to cut spending altogether. While spending on most IT projects fell significantly, the printing business has not skipped a beat. Historical data indicates that printed pages have grown every year since network-connected printers went mainstream in the early 1990s.
So, with printed pages on a relatively constant upward trend, I get asked all the time about how best to save money in this space. Below are some of my most common suggestions.
10 Ways to Save Money through Active Print Management
1) Stop printing altogether. Though it may sound crazy, the number-one way to spend less on printing is to stop printing, or at least reduce your printing of non-essential documents. Reducing print volumes is hard because we have trained ourselves to print as a part of our daily routine. However, by performing a thorough review of your business processes, identifying where and when printing occurs and recommending alternatives, you can realize major cost savings across the company if even small adjustments are made by every employee.
2) Tighten supply chain management of toner and other consumables. There are technologies in the market today that allow you to order toner and other consumables in a just-in-time fashion, delivering them directly to the person responsible for installing them in the printer. This strategy can eliminate toner closets, freeing up cash that would otherwise be tied up in pre-ordered consumables.
3) Outsource non-strategic printing. Non-strategic printing refers to large jobs that typically require special treatment such as color, gloss, weighted paper and/or binding. This can include large format plotters, training materials, infrequent large jobs and finished marketing materials to name a few examples. Although outsourcing sounds expensive, depending upon your printer use it is very possible that this strategy may actually cost less than leasing, maintaining and using the printer hardware required to meet these occasional needs.
4) Track your costs. Create a document that tracks and reports your ongoing printing costs. For small operations, this can be as simple as putting together a spreadsheet and updating the source data every month based on output and supplies used (see the first sketch after this list). For larger organizations, however, manual cost management is much more challenging. That being said, just as in personal finance, if you aren’t tracking your costs you cannot affect them.
5) Get an outside opinion. For a number of reasons, most organizations do not have their arms around their printing costs, and often the day-to-day decisions about print strategy are not made at a high enough level within organizations. Getting an outside opinion about your printer environment from an expert such as your manufacturer or VAR may help you find some areas for savings that can be implemented quickly and with little risk.
6) Replace color printers with monochrome printers. As a rule of thumb, color pages cost eight to ten times as much as monochrome pages, and most documents do not need to be printed in color. Based on your print volumes, you may find an immediate and significant ROI by moving away from color on most machines (the second sketch after this list shows the rough math). For those documents that require color, keep several low-TCO (Total Cost of Ownership) color printers in strategic locations throughout your office environment.
7) Consider compatible or non-OEM toner. Though OEM toner is guaranteed to give you the highest quality output, most non-OEM toners offer near-OEM performance at significantly lower prices. Keep in mind that a non-OEM or compatible toner strategy requires testing and a commitment to work through occasional toner defect issues. Also, it is common to use compatible monochrome toner while still using OEM color toner for priority print jobs.
8) Investigate alternative printer manufacturers. It’s possible that your organization has been purchasing the same brand of printer for many years, for reasons such as standardization, driver compatibility, availability of supplies and familiarity with the technology. Today’s printers, however, incorporate very similar technologies across manufacturers and many have universal print drivers. Reevaluating your standards and using TCO as your baseline for making a decision may lead you to a different brand of printers that will work more efficiently for your needs.
9) Remove locally attached printers from the environment. Locally attached printers (those that serve only one user) are typically the highest-TCO printers in an organization. With only a few exceptions, it is a best practice to push printing to network-connected, shared printers. This strategy can reduce your dependence on high-cost supply items, help managers better evaluate the printer use of subordinates and eliminate SKUs from your supply chain. Because every printer model uses a different toner cartridge, consolidating on fewer models eases the supply chain issues experienced by purchasing departments: fewer SKUs means less work.
10) Invest in print management software. One solution that can assist with many of the strategies above is print management software. Like your other infrastructure-monitoring applications, print management software surveys your organization’s overall print environment and provides you with the data and reports you need to make changes that directly affect output and costs. Without it, getting a complete view of your print environment can be a time-consuming and difficult task. For large organizations with 100 or more printers, this is an ideal solution. At MCPc, we’ve seen savings of 8 to 12% from customers that implement print management software along with active monitoring and strategy.
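Regarding strategy #4 above: for the spreadsheet-sized version of cost tracking, even a few lines of scripting can do the monthly roll-up. Below is a minimal sketch in Python; the CSV layout and column names (printer, pages, cost_per_page) are assumptions for illustration, so adapt them to whatever your fleet actually reports.

```python
# Minimal print-cost roll-up: reads a monthly usage log (CSV) and reports
# spend per printer. Column names are hypothetical -- adapt to your data.
import csv
from collections import defaultdict

def monthly_costs(path):
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            pages = int(row["pages"])
            cost_per_page = float(row["cost_per_page"])  # toner + paper + service
            totals[row["printer"]] += pages * cost_per_page
    return dict(totals)

if __name__ == "__main__":
    for printer, cost in sorted(monthly_costs("print_log.csv").items()):
        print(f"{printer}: ${cost:,.2f}")
```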
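Regarding strategy #6 above: here is the back-of-the-envelope math on replacing color with monochrome. Every figure below is an illustrative assumption; substitute your own contract rates and volumes.

```python
# Rough ROI for moving routine printing from color to monochrome.
mono_cost = 0.015          # dollars per monochrome page (assumed)
color_cost = 0.12          # dollars per color page, ~8x mono per the rule of thumb
monthly_pages = 50_000     # organization-wide color pages per month (assumed)
convertible = 0.80         # share of color pages that don't actually need color

monthly_savings = monthly_pages * convertible * (color_cost - mono_cost)
print(f"Estimated monthly savings: ${monthly_savings:,.2f}")  # -> $4,200.00
```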
Following one or more of these strategies will help you reduce printing costs. Which strategy works best for you? Please feel free to share your best practices by posting a comment. I look forward to opening up this discussion with you.
Jeffrey Goldstein is Senior Consultant at MCPc and is responsible for the delivery of hardcopy and value-added services within the Lifecycle Management Group. Connect with Jeff on LinkedIn.
Solid-state disks (SSD) are the next major step forward in storage technology. Think of the SSD as you would the more familiar flash drive (memory stick), but in a conventional disk drive form factor. However, even though an SSD might look physically similar to a conventional hard disk drive, the SSD does not employ spinning metal disks at all. Rather, the SSD utilizes erasable, writeable, cell-based memory chips that can store data reliably even when they’re powered off. You may see the acronym NAND used in descriptions of SSD technology [NAND = Not AND (electronic logic gate)].
Vastly improved reliability and performance are the main attractions of SSD technology. Unlike a conventional disk drive, the SSD has no moving parts, and this is the major reason for the increased reliability that SSD technology offers.
A Little History
I remember back in the early 1980s when the company I was working for bought their first server… for approximately $10K. About half of the cost of that server was spent on the single, internal 1-GB hard disk drive (no, that is not a typo: 1 GB!). It was a 5.25” form factor, full-height SCSI disk drive, which is beastly by today’s standards, and my colleagues and I wondered if we would ever use all of that space.
Of course the ubiquitous, magnetic hard disk drive has come a long way since the early days of 10-MB, 20-MB and 40-MB personal computer disk drives, which typically used the RLL (Run Length Limited) and MFM (Modified Frequency Modulation) encoding schemes that are completely foreign to most computer professionals today.
These seemingly humble beginnings, and the ever-present requirement for increased drive performance and capacity, have brought us to the comparatively astonishing hard disk drive capacities, performance, reliability and small physical size of modern storage devices. Consider for a moment that quite a few people are walking around with literally gigabytes of music, video and pictures in their pockets stored on devices that fit in the palm of their hands! Lend further consideration to the fact that many of these storage devices are not magnetic hard disk drives, but rather chips or SSDs.
Sidebar: Does anyone remember watching an old episode of the original Star Trek television show where Spock inserted a small rectangular piece of metal into his computer console, and the object was apparently a storage device with no moving parts that contained an enormous amount of data? Well guess what folks — we’re pretty much there!
Current State of Enterprise-Class Disk Storage
In the enterprise, the currently available magnetic hard disk drives are offered with FC (Fibre-Channel), SATA (Serial Advanced Technology Attachment) and SAS (Serial Attached SCSI) connectivity. Older IDE (Integrated Drive Electronics) and ATA (Advanced Technology Attachment) technology can still be found in PCs, and conventional SCSI has pretty much been superseded by SAS.
There is a variation available from a couple of manufacturers that mates an ATA drive with FC connectivity, referred to as FATA. This variation in drive type and connectivity speaks to striking a balance between drive performance, reliability and cost, and is predominantly described using the storage tier model.
Tier 1 is described as the highest performance, most reliable and consequently most expensive storage tier. Tiers 2 and 3 take steps down in performance and reliability, thus lowering drive cost. The savings can be significant, so it is worth making the effort to categorize data into the storage tier model.
Generally, an organization’s most critical and most frequently accessed data will reside on Tier 1 storage devices. Data that is static, infrequently accessed or judged non-critical to daily operations is generally stored on Tier 2 or Tier 3 storage devices. Software is available to assist in the automation of this ongoing categorization of data.
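As a rough illustration of how that automation can work, the sketch below suggests a tier based purely on how long a file has sat idle. Real tiering products also weigh access frequency, data type and business rules; the 30/180-day thresholds and the /data path are assumptions for the example.

```python
# Age-based tiering sketch: files untouched for 30/180 days become
# candidates for Tier 2/Tier 3. Thresholds are illustrative assumptions.
import os, time

def suggest_tier(path, now=None):
    now = now or time.time()
    days_idle = (now - os.stat(path).st_atime) / 86400
    if days_idle < 30:
        return 1    # hot data: keep on Tier 1
    if days_idle < 180:
        return 2    # warm data: candidate for Tier 2
    return 3        # cold data: candidate for Tier 3 / archive

for root, _dirs, files in os.walk("/data"):
    for name in files:
        full_path = os.path.join(root, name)
        print(suggest_tier(full_path), full_path)
```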
Solid State Disk Storage in the Enterprise
So what about the potential of using newer SSDs for enterprise-class storage requirements, and where do they fit in the storage tier model? For some time now, SSDs could only be found on the periphery of the storage market, serving those who needed significantly more performance than conventional magnetic hard disk drives could deliver. (Demanding streaming-video applications are one use that comes to mind.)
The issue with SSD technology to date has been the storage density-to-cost ratio. In other words, SSD technology is quite expensive compared to conventional spinning disks. This is changing, although more slowly than many would like. In fact, SSDs of reasonable size (100 GB+) can be found as options in high-end notebook computers from several popular vendors.
[Image: SSD in a MacBook Air]
When considering enterprise storage, many people use a cost-per-GB or cost-per-TB equation to make purchase decisions. As of this writing, we are just on the verge of being able to justify the cost of SSD technology in enterprise-class servers based upon IOPS (Input/Output Operations Per Second) performance.
Consider that the typical, conventional enterprise-class hard disk drive can deliver roughly 150 to 300 IOPS, compared to SSDs that can deliver approximately 100,000 IOPS, and you begin to realize how some quick math can justify the cost differential for those mission-critical applications where a time advantage translates immediately into a business advantage. The healthcare and financial industries are two examples.
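Here is that quick math, spelled out. The IOPS figures come from the paragraph above; the unit prices are placeholder assumptions, not quotes.

```python
# How many conventional drives does it take to match one SSD's IOPS,
# and what does each option cost per IOPS? Prices are assumed.
hdd_iops, ssd_iops = 200, 100_000    # ~150-300 IOPS per HDD; ~100K per SSD
hdd_price, ssd_price = 400, 8_000    # assumed unit prices in dollars

drives_needed = -(-ssd_iops // hdd_iops)      # ceiling division -> 500 HDDs
print(f"HDDs needed to match one SSD: {drives_needed}")
print(f"HDD cost per IOPS: ${hdd_price / hdd_iops:.2f}")     # $2.00
print(f"SSD cost per IOPS: ${ssd_price / ssd_iops:.4f}")     # $0.08
```

On raw IOPS alone, the SSD can come out far ahead per operation, which is exactly why the transition is expected to start with Tier 1 workloads.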
The writing is clearly on the wall — as the capacity-to-cost ratio of SSDs continues to get closer to that of conventional hard disk drives, buyers will certainly invest in the newer, faster and more reliable technology. As you may have already surmised, this transition is expected to happen at the Tier 1 storage level first, and it may take significantly longer for SSDs to make their way into the Tier 2 or 3 levels of the storage hierarchy where cost is the paramount consideration.
It is also worth mentioning that adopting SSDs is not simply a matter of swapping out existing hard disks for the new SSDs. New controllers and other electronics are required, and consequently SSDs will more than likely find their way into enterprise environments as part of a larger purchase, such as a storage array, a server or a specialty appliance.
If you have not already implemented, or lent consideration to, storage-tiering solutions, you should soon. Why? For most businesses and organizations (and yes, even individuals!), data continues to grow at an unrelenting pace.
One reason is that significantly more transactions are done electronically rather than on paper these days. For example: medical records are now stored electronically, and radiology images are numerous and large (and also stored electronically). Old paper documents are being converted and stored. Even email has become a mission-critical application in many business environments where it wasn’t just a few years ago, and it presents many storage challenges. Corporate databases are growing in number and size.
We could go on and on, but the point is made that storage needs are continuing to increase, and therefore we would be wise to get a handle on the management of all of this data.
Implementing a storage management, or tiering, solution now can prepare you to adopt SSD storage for your Tier 1 or mission-critical requirements when the time is right. It will also be much easier to justify the cost if you can demonstrate that your data storage environment is reasonably under control. Rest assured, adopting a storage management policy and getting ready for SSD in your environment is worth your time: technology moves fast, and SSD will be pervasive in the enterprise in just a few years. Are you ready?
Perry Szarka is a Solution Consultant at MCPc with expertise in data storage and network infrastructure. He works closely with clients to understand their business objectives and discover solutions to help them achieve their goals.
Years ago, mobile computers were the exception in most businesses. They were overpriced and underpowered, and you had to meet very special requirements, such as spending a high percentage of your time on the road, to get one. Instead, most business data was stored on desktop machines that remained in the building at all times and was protected behind multiple layers of security such as:
- Network Firewalls
- Network DMZ
- The walls of the building
- The physical security system of the building
- The security guards
- The security gates
Each of the layers above represents an obstacle that must be overcome to access the data contained within your business’s on-site computers and network. If the network is properly tightened and hardened and your physical security is what it should be, then you have a fighting chance of keeping your data secure. With this kind of security, you know which lines you need to defend: the doors need to be locked, the windows need to be shut and the ports need to be secured and/or closed.
In other words, the battle lines are obvious and defensible. But what happens when someone can pick up a computer and take it outside of these layers of protection? What happens when all of this inherent security can no longer effectively protect your company’s information? What happens when the battle lines get blurry?
The Mobile Workforce
In the third quarter of 2008, laptops shipping from manufacturers surpassed desktops for the first time in the history of the industry. The workforce is becoming mobile and it doesn’t seem to be trending in any other direction. If this is the future of our workplace environment, why and how do we deal with the inherent problem of securing the data on these mobile computers?
The “why” question is easy to answer: it is the obligation of every business to protect the private information of its customers. This holds true for every industry and every sector. Everybody has heard of HIPAA, Sarbanes-Oxley and GLBA. However, most people don’t know that as of December 2008, forty-four states, the District of Columbia, Puerto Rico and the Virgin Islands had enacted legislation requiring notification of security breaches involving personal information.
This means that in most of the United States, if a suspected data breach occurs, it must be reported, which makes mobile computers without full disk encryption high-risk assets. For example, an incident reported by the Department of Veterans Affairs in 2006 involving the personal information of 26.5 million veterans had an estimated remediation cost of $1.59 billion. How much do you think a good full disk encryption solution would have cost them? This isn’t just an issue in the United States, either; in the United Kingdom, for example, there is the Data Protection Act of 1998.
As far as the “how” is concerned: for businesses using mobile devices, full disk encryption is the security practice to implement. Full disk encryption is the process by which your entire hard drive is run through an encryption process. Once complete, the hard drive is unreadable without the proper decryption keys. When the proper keys are presented, only the data needed at the time is decrypted and presented to the user, keeping access to data fast and reliable while dramatically increasing security.
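To illustrate the principle (not any particular product), here is a minimal sketch that encrypts and decrypts a single 512-byte “sector” with AES-XTS, a mode commonly used for disk encryption, using the sector number as a tweak so identical data in different sectors encrypts differently. It relies on Python’s cryptography package; real full disk encryption products add key management, pre-boot authentication and much more.

```python
# Sector-level encryption sketch (the core idea behind full disk encryption).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)  # AES-256-XTS uses a 512-bit (two-part) key

def crypt_sector(data, sector_number, encrypt=True):
    # The tweak ties the ciphertext to the sector's position on disk.
    tweak = sector_number.to_bytes(16, "little")
    ctx = Cipher(algorithms.AES(key), modes.XTS(tweak))
    op = ctx.encryptor() if encrypt else ctx.decryptor()
    return op.update(data) + op.finalize()

sector = b"confidential customer record".ljust(512, b"\0")  # one 512-byte sector
ciphertext = crypt_sector(sector, sector_number=42)
assert crypt_sector(ciphertext, 42, encrypt=False) == sector
```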
It has become quite evident that mobile computers are here to stay, and properly securing those computers should be directive number one. The first step in a proper mobile data security plan should be full disk encryption. Though it is not the only security measure to be taken, it is the top priority for most organizations, providing both peace of mind and a strong foundation for regulatory compliance. Full disk encryption is the first line of defense for your data because it protects your data during the times you are not directly engaged with your mobile computer, which is when loss and theft are most likely to occur.
Full disk encryption is not the only mobile data security technology that should be considered. There are technologies like personal software firewalls, file-level encryption for highly sensitive data and GPS-based asset tracking systems, but they all address point problems. The most secure systems must have a strong foundation, and that foundation is full disk encryption.
For more information on the reality of data breaches visit: http://datalossdb.org/
Jason Dell is a Converged Network Solution Consultant at MCPc, and is responsible for developing and programming custom solutions for clients. His expertise includes network security and security for mobile devices in the enterprise. Connect with Jason on LinkedIn.
Every company, regardless of size, needs connectivity to the “outside world” for voice and data. Small offices typically use simple analog lines and DSL or cable Internet for cost effective connections. Larger offices need higher capacity for voice connectivity and higher bandwidth for data applications such as Voice over IP (VoIP), video or other data applications. Let’s review the options, how they scale and the benefits of the different solutions.
The basic voice connection is a simple analog line, commonly referred to as a plain old telephone service (POTS) line. Virtually all phone systems accommodate these circuits, and companies that use them run local and long-distance service over them. Local phone companies still offer features such as call forwarding, rollover, caller ID and call waiting for a fee. Other businesses may still use a service called CENTREX lines, which used to be popular when phone systems were too costly for smaller companies.
At some point, as a company grows — possibly when you get to 8 or 10 analog lines — it may be more cost-effective to upgrade from analog service to digital service. Most growing companies will migrate to a PRI (Primary Rate Interface), which is a digital circuit that provides 23 digital trunks for inbound and outbound calling. A similar service is a DS-1 or T-1 line, which provides 24 digital channels.
PRIs and T-1s are used for companies that do a higher volume of calling. This is because PRIs and T-1s offer more trunks (lines) for less cost as well as lower long distance rates.
One thing to be aware of with PRIs is whether the service is “measured” or “non-measured.” “Measured” means that there is a limit on the number of local calls that can be made without a per-minute fee, while “non-measured” means that all local calling is included in the monthly fixed cost. In either case, long distance calls are billed on a per-minute basis.
When comparing PRI costs, remember that a lower cost might mean it is a measured circuit and you will end up paying for local calls. In most cases, a non-measured circuit is preferable because for a slightly higher monthly fee you do not have to worry about how many local calls you are making.
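A quick breakeven calculation makes the comparison concrete. The rates below are hypothetical; plug in the actual quotes you receive.

```python
# Measured vs. non-measured PRI breakeven (all rates are assumptions).
measured_base = 450.00     # monthly cost of the measured circuit
nonmeasured_base = 550.00  # monthly cost of the non-measured circuit
per_minute = 0.03          # local per-minute rate on the measured circuit

breakeven = (nonmeasured_base - measured_base) / per_minute
print(f"Non-measured wins above {breakeven:,.0f} local minutes per month")
# -> Non-measured wins above 3,333 local minutes per month
```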
Voice circuits can grow from PRI or DS-1 to larger voice circuits such as DS-3 or OC-3 that high-end call centers or very large companies use.
Data is where the largest growth and changes have been made in recent years. Carriers have upgraded their networks to deliver high-speed data at very competitive costs.
When we refer to data, we are typically referring to the connections to the public Internet or a private network connection between offices and/or cities. This becomes important when companies want to run voice and video over their networks. The circuits must have the bandwidth to handle the traffic and more importantly, they must have QOS (quality of service) or COS (class of service) to allow the prioritization of video and voice traffic over simple data traffic such as Internet browsing or email. This makes for a much clearer connection and is imperative for eliminating jitter or latency on voice and video transmissions.
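Prioritization starts with marking the traffic. As a minimal sketch, the snippet below shows how an application can tag its packets with DSCP EF (Expedited Forwarding, value 46), the marking typically used for voice; the address and port are placeholders, and the marking only helps on networks configured to honor it.

```python
# Mark outgoing UDP packets with DSCP EF so QoS-aware routers can
# prioritize them. DSCP occupies the upper six bits of the IP TOS byte.
import socket

DSCP_EF = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # voice (RTP) rides on UDP
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
sock.sendto(b"voice payload", ("203.0.113.10", 5004))  # placeholder destination
```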
Simple Internet access comes in many flavors. On the low end you have “shared services” such as DSL or cable modems. These are low-cost solutions that can be very effective for small offices.
Beyond those services we move into the traditional Internet circuits. These can be T-1 (1.5 Mbps) or multiples of T-1s (3.0 Mbps, 4.5 Mbps, etc.), and can reach up to and beyond DS-3 (45 Mbps) speeds, but typically require that you terminate the circuits with your own equipment such as routers.
Better yet are Ethernet solutions which do not require client hardware (routers) and are delivered as an Ethernet hand-off. These are very scalable and flexible and usually start at 10 Mbps and go up from there, most commonly up to 100 Mbps.
Are you confused yet?
It can be confusing to compare the low-cost, 10-Mbps cable Internet service you may have at home to a commercial T-1 that is only 1.5 Mbps, and you may wonder why there is such a difference in cost for a seemingly lower-speed circuit. The main reason is that, in most cases, DSL and cable solutions are shared services, meaning that many other customers share your connection and may use the same access that you have at different times, so you may experience uneven performance. (Have you ever noticed at home that sometimes your Internet is really fast and other times it’s slow? Well, that’s why!)
For business circuits, the access is not shared; it is dedicated to one customer, so the speed is fixed and symmetrical (the same speed up and down). The up and down speeds on shared services are typically different, e.g. 3 Mbps up and 10 Mbps down. This is most prevalent in DSL: ADSL, or Asymmetric DSL (different speeds up and down), versus SDSL, or Symmetric DSL (the same speed up and down). SDSL is sometimes called commercial-grade DSL.
Wireless data connections are also becoming popular and carriers such as Lightyear, Verizon and Sprint have offerings that are cost effective and have decent bandwidth.
WAN (Wide Area Network) Connections
Companies that have multiple locations need connectivity. Often, the connections must be compliant with industry regulations relating to security (healthcare, financial, and so on). The simple way to make the connections is through Virtual Private Network (VPN) connections over the public Internet. As long as the site has an Internet connection, a secure VPN can be established using firewalls or other similar hardware. This is the most economical solution for WANs; however, it has its drawbacks.
As mentioned earlier, the challenge with VPNs over the public Internet is that there is no QOS or COS, so all packets travel at the same priority. Voice and video do not have priority over any other data, often causing quality problems. For this reason, VPNs are not a solid solution for any site that has more than a few employees or where the demands for voice and video are high.
Options beyond this are to install either point-to-point private networks or MPLS (Multiprotocol Label Switching). A point-to-point circuit is a good solution if there are only two sites and no growth is expected. However, if there are more than two sites or plans to expand, MPLS is the absolute best solution.
MPLS circuits take any type of traffic (multiprotocol) and prioritize the way different packets (data, voice or video) are transmitted over the circuit. This is done by tagging each packet with its type (label switching). For example, if two packets hit the circuit at the same time and want to travel from point A to point B, and one packet is voice while the other is data, the MPLS circuit will give priority to the voice packet. This means that you have QOS or COS with MPLS, and voice and video quality is now assured.
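As a toy model of that prioritization, the sketch below releases packets that arrive together in priority order, so voice beats video, which beats data. The priority values are illustrative only; real MPLS uses labels and carrier-configured classes of service.

```python
# Toy priority queue: packets arriving at the same time leave in
# priority order, mimicking how a QoS-enabled circuit favors voice.
import heapq

PRIORITY = {"voice": 0, "video": 1, "data": 2}   # lower value = sent first
queue = []

arrivals = [("data", "email attachment"), ("voice", "RTP frame"), ("video", "keyframe")]
for seq, (kind, payload) in enumerate(arrivals):
    heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))

while queue:
    _prio, _seq, kind, payload = heapq.heappop(queue)
    print(f"transmit {kind}: {payload}")   # voice, then video, then data
```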
The other major advantage is that MPLS allows for fully meshed connectivity for multiple sites. This makes for much easier designs relating to disaster recovery solutions and back-ups. MPLS is a private IP connection and meets all requirements for industry compliance and security and can be run through regular or multiple T-1s or Ethernet.
There are options to get Internet access off of the MPLS connections if you desire, via cloud computing solutions. This means you can share your one connection for multiple types of access, which may be a better way to deliver Internet across your enterprise. Alternatively, all remote sites could “home run” back to the headquarters over the private MPLS and hop on a single Internet circuit at that point. Which option is best depends on the design philosophy within your organization.
Finally, we need to briefly touch on the converged circuits that most carriers offer. They go by different names, such as “Converged Circuit,” “Flex Circuit” or “Dynamic Circuit,” but they all do basically the same thing: through one connection (1.5 Mbps, 3.0 Mbps, 10 Mbps, and so on), they deliver multiple services such as:
- Local calling
- Long distance calling
- Internet access
- WAN connectivity options (ability to connect to other converged circuits in your enterprise)
Bundling the services provides some benefits such as:
- Overall costs are typically lower
- Single point of contact for everything
- One bill to pay
Additionally, depending on the carrier they may offer a bundle of long distance minutes along with the service or greatly reduced calling rates.
These circuits are very viable solutions for small-to-medium-sized businesses. One drawback is that if your one connection goes down, you lose local calling, long-distance calling and Internet access. Therefore, some companies will install one or two analog lines outside of their converged circuit as a safety-net backup in the event of a circuit failure.
This discussion was intended as an overview. My hope is that it gives those learning about voice and data services as their company grows a better understanding of the options available today.
Subscribe to the MCPc blog to stay up-to-date as we dive deeper into the subject of voice and data solutions, as well as other technology solutions important to today’s growing business.
Frank Marro served as Regional Vice President responsible for sales management in Cincinnati, Dayton and Columbus, Ohio. He also directed MCPc’s national carrier service program, which provides solutions for clients looking for voice, video and data circuits for WAN connectivity.