How to Deploy Smart Data Centre Infrastructure Management

Over the past decade, many organisations have deployed Data Centre Infrastructure Management (DCIM) platforms: software that captures availability and performance metrics across the hybrid IT environment, including cloud infrastructure, servers, network devices, and power and storage systems.

These platforms monitor and collate the availability of IT infrastructure and provide utilisation metrics in real-time to help the IT and data centre operations teams manage the environment efficiently. DCIM can also reduce the amount of time used to manage assets and capacity, monitor performance and be proactive with detecting and managing infrastructure faults that cause downtime.

Fifteen years ago, companies selling DCIM platforms told a great story, and many organisations looked at these solutions as the silver bullet for smart, efficient management of their data centre estates. Organisations made the investment and implemented the solutions, and about six to twelve months after the implementation project was completed, reality kicked in: DCIM required a lot of resources and specialist skillsets to administer and maintain the database and keep the systems running. This was hard to manage, and the solution quickly became known as a black art.

Many organisations went back to their vendor only to discover that more costly professional services would be needed to put things straight. It was difficult to convince the budget owner to spend more on a solution that did not deliver what was expected in the first place.

DCIM technology has taken huge steps in the past few years and many vendors are now offering operational features that can help you streamline and orchestrate your daily IT operations. These new platforms mark a new era of monitoring known as DNIO (Data Centre Network Infrastructure and Operations). DNIO is the next generation of DCIM and extends the solution to provide a full suite of operational features. Here is a high-level overview of some of those key features.

Integration with ITSM systems is low-hanging fruit for quick wins within your technology services teams. The benefits include orchestration of asset discovery, database maintenance and administration; automated incident ticket creation when the platform detects a failure; planning of adds, moves and changes; and seamless updating of the CMDB (asset database). Together, these give the operational teams the tools to respond quickly to business demands. Ultimately this keeps your services up and running and helps prevent that dreaded call from your customers telling you they have had an outage.
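
The failure-to-ticket automation described above can be sketched as a simple mapping from a DCIM alert to an ITSM incident payload. The field names and severity mapping below are hypothetical assumptions; real DCIM and ITSM products each have their own APIs.

```python
# Sketch: translate a DCIM failure alert into an ITSM incident payload.
# All field names and the severity mapping are illustrative assumptions.

SEVERITY_TO_PRIORITY = {"critical": 1, "major": 2, "minor": 3, "warning": 4}

def alert_to_incident(alert: dict) -> dict:
    """Build an incident ticket payload from a DCIM alert."""
    return {
        "short_description": f"{alert['device']}: {alert['message']}",
        "priority": SEVERITY_TO_PRIORITY.get(alert["severity"], 4),
        "category": "Data Centre Infrastructure",
        "cmdb_ci": alert["device"],  # keeps the CMDB asset record linked
        "opened_at": alert["timestamp"],
    }

alert = {
    "device": "pdu-rack-07a",
    "severity": "critical",
    "message": "Branch circuit breaker tripped",
    "timestamp": "2020-01-15T03:42:00Z",
}
print(alert_to_incident(alert)["priority"])  # 1
```

In practice the payload would be sent to the ITSM system's REST endpoint, and the same alert could also drive the change-planning and CMDB updates mentioned above.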

Solutions that can integrate with existing systems such as billing, building management and enterprise resource management will help you maximise the payback and bring your teams together. Everyone will be working from the same platform with access to the same data, keeping it consistent and error-free.

Before starting an implementation, it’s best to obtain buy-in from other departmental stakeholders and collaborate to build the scope. This should start with day one requirements and developing a roadmap for future implementation phases. Often the IT and facility teams are detached, with their own priorities and completely different regimes when it comes to the management of the environment and the control measures used to minimize disruption in the event of maintenance or failure situations. The trick here is to get your teams talking and working closely together.

Integrate the solution with all aspects of the data centre IT and facility operations to give your teams a holistic view from one central place. The teams can work closely together to ensure the environment is tuned for the best performance, with energy managed efficiently by monitoring IT loads and tuning cooling systems. IT infrastructure teams can be notified automatically when a critical facility outage is detected, triggering scripts that orchestrate emergency operating procedures. This removes the need to rely on humans to make calculated decisions in stressful situations, where the risk of human error rises.

Infrastructure management platforms that support multi-site operations are where things get really interesting. You can start thinking outside the data centre and extend coverage to your offices and manufacturing plants, with the ability to monitor and manage all of your infrastructure, from IT to facility, physical security and IoT systems. This helps your business move into smart buildings and achieve its green initiatives, reducing your carbon footprint by decreasing energy consumption and greenhouse gas emissions.

At Rahi Systems we have the ability to monitor and manage the full stack, including your metro and wide-area network connectivity, all from a single pane of glass.

Finally, a key part of the success of your implementation is training the people who are going to use it. I have heard and experienced too many stories of staff not knowing how to use DCIM, or even understanding why it is there. This results in a wasted solution that is forgotten, left on the shelf and eventually replaced with another.

The biggest challenge is convincing the business to make the investment to improve your operations and mitigate risks. However, if you can gain buy-in from other departments and key stakeholders, you will be able to collaborate, build a sustainable business case and focus on what you do best: improving efficiency, managing costs and maximising the agility of the business.

The next decade of DCIM has arrived. Are you ready to take your organisation to the next level of smart efficiency? 

Get to Grips with Spares Management

A “spare” can be defined as any asset that is not in production, but that is available to use when needed. It is typically carried as an asset on the books. Spares are distinct from “consumables”, which are relatively low-cost items such as patch cords that you buy in bulk and maintain in stock.

Every data center has spare equipment lying around, either as a hedge against downtime or to accommodate rapid growth. Having spares available is smart, but few organisations manage them effectively. This leads to trapped costs, wasted space, and security and compliance risks if assets aren’t disposed of properly.

Spares are sometimes purchased proactively for repair purposes, to bypass lead times, or as a “strategic spare” when you anticipate you may need to add capacity fast.

Assets may also become spares unintentionally: perhaps they were intended for Project X but plans changed, or they were bought “just in case” and never really used. There is also the scenario in which equipment that was in production is replaced by new technology and put back on the shelf as a spare, stacking up even more old tech.

Then there are those “spares” that are less than useful — obsolete or defective equipment that should be fully decommissioned and disposed of. At a minimum, this equipment takes up space and creates a cluttered data center that’s hard to manage.

However, IT teams seldom think about spares management until finance steps in for an audit or inventory. Suddenly management sees all this equipment and starts counting money. The value of this equipment quickly adds up to five- or even six-figure sums!

What makes it hard to manage spares? For starters, spares are typically offline. That means traditional IT management tools won’t help much. Another complication is that you can’t tell the purpose or history of the spare from observation. It may be a genuine spare all right, or it may be a defective part that was removed from production and put on the shelf “with the other parts.” 

Even if you know that an asset is defective, you can’t necessarily throw it away. Storage media may contain personally identifiable information, and other types of equipment may have credentials and configurations that should not be exposed. Compliance with company policies, laws and regulations such as GDPR makes decommissioning a complicated process. That’s why defective parts are often kept onsite just to be sure.

Proper spares management touches a number of IT and business management processes, including lifecycle management, availability management, financial management, asset management and risk management. Different departments within the organization have a stake in it. As an initial step, you’ll need to define what’s a spare and what’s a consumable. It could be something as simple as a dollar value: everything less than $1,000 is a consumable. Once you have your definition, your DCIM tools and configuration management database can help simplify spares management.

To keep spares under control, you will need to review each asset in your data center —racked or unracked — and determine whether its value to the business outweighs the risk and cost of having it there. If it does, keep it. If not, dispose of it in a responsible and documented manner. 
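
The keep-or-dispose review can be reduced to a simple decision rule. The sketch below uses the $1,000 consumable threshold mentioned earlier; the asset fields and cost figures are illustrative assumptions, not a prescribed model.

```python
# Sketch of the spares review: classify each non-production asset.
# The threshold and asset fields are illustrative assumptions.

CONSUMABLE_THRESHOLD = 1000  # dollars; everything below this is stock

def classify(asset: dict) -> str:
    """Return 'consumable', 'keep' or 'dispose' for a non-production asset."""
    if asset["value"] < CONSUMABLE_THRESHOLD:
        return "consumable"
    # Keep the spare only if its value to the business outweighs
    # the cost and risk of having it on the shelf.
    if asset["business_value"] > asset["holding_cost"] + asset["risk_cost"]:
        return "keep"
    return "dispose"  # in a responsible, documented manner

inventory = [
    {"id": "patch-cords",  "value": 150,  "business_value": 0,    "holding_cost": 0,   "risk_cost": 0},
    {"id": "spare-switch", "value": 8000, "business_value": 9000, "holding_cost": 500, "risk_cost": 1000},
    {"id": "retired-san",  "value": 5000, "business_value": 1000, "holding_cost": 800, "risk_cost": 2000},
]
for asset in inventory:
    print(asset["id"], classify(asset))
```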

Getting to this state may require an onsite inventory, audit and clean up. Rahi Systems can help with this. And after your spares-management process is started you’ll find that it takes less and less effort with each audit cycle.

A 5-Step Approach to Building an Intelligent Edge

Many organizations have adopted a cloud-first IT strategy to improve the way they do business. Public, private and hybrid cloud platforms maximize efficiency while allowing IT staff to manage resources, save costs, increase scalability, streamline and coordinate data management, and reduce the human error factor.

But while the cloud has many benefits, it is not always the right fit for modern IT workloads — organizations can experience performance issues when providing services to remote locations. With the growth of the Internet of Things (IoT) and developments in augmented reality and artificial intelligence, many organizations want to extend their cloud to edge data centers that are closer to users and IoT devices. Edge data centers provide faster access to data, enhanced regulatory compliance, and the connectivity needed to collect real-time data that can be pushed into the cloud for analysis.

Building an edge data center can be challenging. In some cases, organizations are faced with implementing an edge data center in a harsh environment with limited space and IT resources. Other times, organizations have an ideal location that allows them to follow traditional data center design practices. In both scenarios, careful planning and execution are required to create a scalable edge data center that supports business needs without impacting existing services or introducing new risks.

Rahi’s engineers have hundreds of years of combined experience designing, building and supporting data centers that meet customer uptime and SLA requirements. We also look at the bigger picture when choosing technology, with features that support and integrate with existing systems and applications, manage costs, and enable growth of the business.

Our holistic approach is broken down into five steps that are treated individually yet form part of the turnkey solution. The Rahi engineering team meets with each client and carries out a full discovery of requirements for each step. We take time to understand the edge location and existing technology and applications being used, and any preferences for technology vendors and features. This is followed up with a site survey to gain a clear picture of any challenges and obstacles that will need to be considered.

Step 1 – Physical

The physical infrastructure is the foundation layer that supports the edge data center. We will review power, cooling and space requirements to support the desired uptime SLA and mitigate risks that could cause outages and impact application or service availability.

Step 2 – Security

Security is crucial to the success and performance of the edge data center. A combination of hardware and software security is needed to protect applications, data and the network from actions or events that could cause serious damage to the organization. Mechanical, electronic or software-defined controls are also needed to prevent unauthorized access to the physical IT environment. Depending on the customer’s requirements, this can be as simple as locks and keys or more sophisticated multifactor authentication with alerting when an event occurs. CCTV systems can be used to monitor the site remotely and provide record retention to support business, security and compliance requirements.

Step 3 – Connectivity

Robust network infrastructure is needed for high-performance, scalable, reliable and secure access to the edge data center. Rahi’s engineers have extensive experience in ensuring that wired and wireless LAN and telecom services support mission-critical applications and edge services. We also provide cabling services and work with innovative technology partners to provide connectivity options to future-proof the network.

Step 4 – Management and Deployment

A secure remote management platform is needed to control the entire technology stack. In most cases, this will need to be a vendor-neutral solution that seamlessly integrates with existing and new systems and addresses the critical needs of networking, in-band and out-of-band IT infrastructure. Because it’s no longer acceptable to dispatch preconfigured equipment containing sensitive information such as user credentials and network configurations, organizations need a management system that enables remote automated deployment of new equipment. Rahi’s cloud-ready solutions allow for a hands-free approach to managing and deploying infrastructure and provide visibility into the entire global IT estate from a single interface.

Having the ability to view all of your assets in a central location gives data centre managers complete control of a continuously growing ecosystem. Metrics collected from the infrastructure are placed into a database that can be used to produce reports on resource utilization, assist with capacity planning and reveal growth patterns. Alerts help IT teams detect early warning signs of infrastructure faults and prevent outages before they occur.

Step 5 – Compute and Storage

With the huge variety of compute and storage solutions available today, it can be difficult to choose the right platform for various workloads. The edge environment creates an additional set of requirements that must be considered. Rahi’s engineers leverage converged infrastructure and software-defined systems to design an edge data center that is easy to deploy, highly resilient, and capable of operating in the environment dictated by the edge location.

Rahi Systems offers a comprehensive suite of services and solutions focused on edge data centre deployments. Our passion for engineering drives our team to design and build environments that will enable your business to grow, succeed and overcome bottlenecks that impact productivity. We work closely with best-of-breed technology partners and educate our clients on the latest innovations and trends in technology, enabling them to choose high-quality, reliable solutions that are built to last and enable future-proofing, scalability and availability.

Interested in learning more? Why not arrange a meeting with one of our experts to see how we can help you build an edge data center that will overcome your challenges and make your organization more productive.

Software-Defined Application Delivery Controllers Provide Key Benefits

In our last post, we discussed how growing workload demands in enterprise data centers require a new approach to application delivery controllers (ADCs). Traditional appliance-based ADCs are slow and costly to implement, difficult to manage, and lack the flexibility and scalability needed in today’s dynamic data center environment. Software-based ADCs are less expensive because they can be deployed on commodity hardware but otherwise have the same drawbacks as the appliance-based approach.

Software-defined ADCs overcome these limitations. A centralized controller provides automated, policy-based management of a pool of ADCs that can be distributed across multiple environments. This not only streamlines administration, but makes it possible to scale ADC services up or down according to traffic levels.

The controller receives and analyzes a continuous stream of application telemetry data sent by the distributed ADCs. This enables the controller to automatically decide on service placement, autoscaling and high availability for each application. The control plane also monitors the “health” of the system and reacts to data plane component failures or any application changes.

Because the data plane elements are deployed on commodity hardware, they can be spun up or down dynamically wherever services are needed. Active-active high availability configurations can be set up at a fraction of the cost of traditional ADCs.
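
The controller's autoscaling logic can be illustrated with a simple utilisation threshold check. The thresholds and per-engine capacity below are illustrative assumptions; a production controller would weigh many more telemetry signals.

```python
# Sketch: threshold-based scale-out/scale-in decision from telemetry.
# Thresholds and per-engine capacity are illustrative assumptions.

def scaling_decision(rps_samples: list, capacity_per_engine: float,
                     engines: int, high: float = 0.8, low: float = 0.3) -> int:
    """Return +1 to scale out, -1 to scale in, 0 to hold."""
    avg_rps = sum(rps_samples) / len(rps_samples)
    utilisation = avg_rps / (capacity_per_engine * engines)
    if utilisation > high:
        return +1
    if utilisation < low and engines > 1:  # always keep one engine running
        return -1
    return 0

# Three engines rated at 10,000 requests/sec each, traffic averaging 27,000 rps:
print(scaling_decision([26000, 27000, 28000], 10000, 3))  # 1 (90% utilised)
```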

Software-defined ADCs can also provide advanced functionality, including:

  • Application Performance Monitoring, Troubleshooting and Insights. One of the most powerful capabilities of the software-defined ADC is the ability to derive rich application insights. Administrators can troubleshoot application issues without poring over log files, and gain analytics on performance, security and users.
  • DVR-Like Function. The system records all the transactional data and plays it back on demand through the central management console. This enables administrators and application owners to view roundtrip times and analyze application issues over time.
  • Policy Repository. Many network administrators have to painstakingly document virtual service configurations (sometimes using spreadsheets) to ensure an accurate representation of their deployed services. The policy repository holds all of the system configurations, including the details of virtual services and pool members.
  • Flexible Consumption Models. The software-defined approach makes it possible to consume local and global load balancing, web application firewall, and other functionality as a service, with Software-as-a-Service as an option.

The Avi Vantage software-defined ADC platform from Avi Networks provides software load balancing and an intelligent web application firewall in a centrally managed, elastic services fabric. It consists of three components:

  • The Avi Controller for policy-based orchestration of multi-cloud application services
  • Avi Service Engines (distributed data plane) that run on x86 servers, virtual machines, containers or in the cloud
  • The Avi Console, which provides automated, self-service provisioning and advanced analytics to drive intelligent decisions

The platform adapts to dynamic environments, empowering IT administrators with next-generation tools for multi-cloud traffic management.  The Avi Controller auto-scales load balancing resources based upon thresholds, and the Avi Service Engine scales out horizontally on demand. Based upon REST APIs, Avi Vantage provides end-to-end visibility and integrates seamlessly with the continuous integration/continuous delivery pipeline for rapid application rollouts.

Rahi Systems is an Avi Networks partner with expertise in application service delivery. Let us show you how the software-defined approach provides the flexibility, scalability and visibility needed for today’s dynamic data center environments.

Overcoming the Drawbacks of Traditional Application Delivery Controllers

Half of all IT workloads still run in enterprise data centers and will continue to do so through at least 2021, according to the Uptime Institute’s Annual Data Center Survey for 2019. In fact, workload demands in enterprise data centers continue to increase, which can cause performance problems as resources reach capacity. Many data center operators are also spreading workloads across multiple data centers and the cloud to improve resilience, further increasing complexity and risk.

Data center operators use application delivery controllers (ADCs) to provide consistent application services across the data center and the cloud. ADCs perform load balancing to distribute client requests across a pool of servers, maximizing performance and capacity utilization by ensuring that no one server is overloaded. ADCs also typically provide caching, compression and SSL processing to further reduce server load and increase throughput.
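
The load-balancing behaviour described above, keeping any one server from being overloaded, can be sketched with a least-connections strategy, one of several common algorithms (server names here are illustrative):

```python
# Sketch: least-connections load balancing across a server pool.
# Server names are illustrative; a real ADC also health-checks members.

class LeastConnections:
    def __init__(self, servers: list):
        self.active = {s: 0 for s in servers}  # open connections per server

    def pick(self) -> str:
        """Route a new request to the server with the fewest connections."""
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server: str) -> None:
        """A request completed; free one connection slot."""
        self.active[server] -= 1

lb = LeastConnections(["web-1", "web-2", "web-3"])
print([lb.pick() for _ in range(4)])  # ['web-1', 'web-2', 'web-3', 'web-1']
```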

ADCs have traditionally been offered as appliance-based or software-based solutions. Appliance-based ADCs consist of proprietary software running on hardware with specialized processors. They require upfront capital investments and are administered manually on a box-by-box basis. Designed in the client-server era, they are unable to scale up and down elastically to meet changing workload demands. Operators tend to overprovision appliance-based ADCs so that they don’t have to wait to buy more hardware to support new applications.

Software-based ADCs are somewhat more flexible in that they typically run on commodity hardware or even in a cloud environment. However, even virtualized ADCs lack the agility, elasticity and distributed architecture needed in today’s dynamic environments. Neither software-based nor appliance-based solutions incorporate security services such as web application firewalls and distributed denial of service (DDoS) protection.

A better approach is to apply software-defined principles to ADCs, separating the control plane from the data plane. This would allow for centralized management of a distributed pool of ADCs. Load balancing functionality could be scaled up or down in response to real-time traffic, accelerating application rollouts and enabling multi-tenancy for internal groups without buying more appliances.

Policy-driven self-service could even allow for automated provisioning of application delivery services for line-of-business applications and dev/test use cases. Role-based access control would enable internal customers to monitor their applications.

Security services such as dynamic DDoS protection, app isolation and micro-segmentation could be incorporated into the ADC software. Software-defined ADCs could also integrate with software-defined networking protocols, public cloud APIs, container orchestration platforms and DevOps tools.

Service delivery in software-defined ADC architectures is provided by a distributed data plane. The ADCs in the data plane sit in line with application traffic and continuously collect and relay application telemetry data to the controller. The software can be deployed to deliver services close to the application or even on a per-application basis. This approach also enables services for east-west traffic among applications in addition to the traditional north-south transactions between users and applications.

Advances in the processing power of x86 servers have made it possible for software-defined ADCs to provide elastic, high-performance and highly available services at a lower total cost of ownership than traditional solutions. In our next post we’ll dive deeper into software-defined ADCs and take a look at the Avi Vantage platform from Avi Networks.

Imagining the Data Center of the Future Today

What will the data center look like in 2019 and beyond? That’s one of the most pressing questions facing data center managers as the new year approaches. The IT environment continues to evolve rapidly to support changing business requirements, making it difficult to conceptualize a data center infrastructure that can meet tomorrow’s demands.

Organizations are looking to take advantage of artificial intelligence (AI) and machine learning, the Internet of Things (IoT), and other workloads that weren’t on anyone’s radar a few years ago. These applications require sophisticated new hardware that delivers the highest levels of performance, and a network capable of moving large volumes of data with minimal latency.

In addition to “new,” data center managers must worry about “more” — more workloads, more data, more users, more devices. The IT environment is growing quickly within the data center, at the edge and in the cloud. The need to control capital and operational costs requires a high-density architecture that maximizes the use of data center real estate and power and optimizes resource utilization.

The thing is, few data centers were planned with this degree of change in mind. There are so many things to consider — rack size and configuration, power density and distribution, cooling, and cabling, just to name a few. IT teams tend to base decisions on current requirements with a fudge factor for growth. Those decisions probably didn’t take into account the need for AI servers with GPUs that consume a lot of power and generate a lot of heat, or IoT applications that demand a highly distributed environment.

So, what requirements should data center managers be considering as we enter 2019?

  • A flexible data center infrastructure that can facilitate change and support workloads at the edge
  • Cooling solutions that optimize airflow to enable higher power densities while keeping a lid on costs
  • Power distribution solutions that can accommodate various types of hardware
  • Automation tools that streamline workflows and provide the data IT teams need to better manage the data center environment
  • “Future-proof” cabling solutions that enable bandwidth upgrades and changing architectures without costly “rip and replace”
  • Interoperable systems that maximize utilization and enable consolidation
  • Expertise in the design, architecture, deployment and validation of cutting-edge IT solutions
  • Provisioning processes that enable the rapid rollout of new applications and services anywhere in the world

Rahi Systems offers a suite of solutions and services that help data center managers meet these requirements. From our FlexIT infrastructure products to industry-leading compute, storage, networking and A/V solutions, we are your one-stop resource. Our experts have the knowledge and experience to help you design an agile data center environment and select the right products to meet your business and IT needs. Our distribution and logistics capabilities ensure that these solutions are delivered to the right place at the right time around the globe.

What does the data center of the future look like? That’s difficult to predict, but Rahi Systems is here to help you address today’s data center challenges and position your organization for the future.

The Rahi blog will take a brief hiatus for the next two weeks and resume after the first of the year. We wish you Happy Holidays and look forward to serving you in 2019.

How DCIM Tools Facilitate Data Center Asset Management

IT asset management has never been easy, and it becomes exponentially more difficult as data centers increase in complexity. However, many organizations continue to use pen, paper and spreadsheets to track IT assets. These manual processes cannot keep pace with the rate of change in the IT environment.

Manual processes are also error-prone. According to the International Association of IT Asset Managers, the average organization can expect an error rate of 15 percent or more when manually tracking IT assets. Manual data entry alone comes with a 10 percent error rate due to transcription and typographical mistakes.

Think about the ramifications of these statistics. In a data center with 1,000 assets, 150 may be missing, lacking warranty coverage or due for upgrade. The data center manager has no way of knowing because these assets haven’t been accurately tracked and monitored.

Worse, data center managers lack the visibility they need to identify underutilized and even unused systems and virtual machines (VMs) that should be consolidated or decommissioned. Physical servers often operate at less than 12 percent capacity, particularly when running a single application. A VM costs about $6,000 a year to operate, according to Gartner, and orphaned VMs that aren’t effectively managed are prime targets for cyberattacks.

Eliminating waste and reducing the attack surface should obviously be a high priority. But when there’s no trusted source of information about asset ownership, interdependencies and utilization, data center managers can’t make effective decisions regarding these systems. Conducting periodic manual audits is time-consuming and painful, and provides only a point-in-time snapshot of a dynamic environment.

Data center infrastructure management (DCIM) solutions offer relief from these headaches. In addition to monitoring and collecting data on the physical data center environment, DCIM solutions include IT asset discovery and management capabilities that facilitate the tracking of IT equipment across its lifecycle. This enables data center managers to make more-informed decisions regarding deployment, maintenance and operation.

Of course, effective IT asset management is more than just a ledger of the equipment that exists within the environment. DCIM tools provide ongoing monitoring of the environment, giving managers greater visibility. They also include change management capabilities that aid in the planning and tracking of moves, adds and changes. This minimizes the risk of human error and also aids in capacity planning. The integration of IT asset management with environmental monitoring helps managers understand how much power existing equipment is consuming and how many more devices the facility can handle.

Simulation capabilities enable data center managers to analyze the potential impact of planned maintenance, system failure and changes to the environment. What would happen if a particular device were taken offline? Would other systems and services fail as a result? This not only helps managers configure alerts and escalations but allows them to more confidently plan for consolidation and upgrades while maintaining SLOs and SLAs.
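
This kind of what-if analysis boils down to walking the asset dependency graph. A minimal sketch, with illustrative device names and dependencies:

```python
# Sketch: find every asset that fails, directly or transitively,
# when one device is taken offline. Names/dependencies are illustrative.

def impact_of_failure(depends_on: dict, failed: str) -> set:
    """Return all assets that depend (transitively) on `failed`."""
    impacted, frontier = set(), {failed}
    while frontier:
        newly_hit = {asset for asset, deps in depends_on.items()
                     if deps & frontier and asset not in impacted}
        impacted |= newly_hit
        frontier = newly_hit
    return impacted

# Each asset maps to the set of assets it depends on.
depends_on = {
    "ups-1": set(),
    "pdu-a": {"ups-1"},
    "rack-7-switch": {"pdu-a"},
    "db-server": {"pdu-a", "rack-7-switch"},
}
print(sorted(impact_of_failure(depends_on, "ups-1")))
# ['db-server', 'pdu-a', 'rack-7-switch']
```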

Rahi Systems offers industry-leading DCIM solutions backed by the knowledge of our data center infrastructure experts and IT solution architects. We can help you get a better handle on your data center assets so that you can optimize your environment, enhance security and reduce costs and risk.

5 Winning Ways to Optimize Data Center Performance

It’s the start of the 2018 college football season, and student athletes across the country are working hard to optimize their performance. Proper diet, exercise and training help ensure they have the strength, stamina and skills to help their teams win games.

Data center managers are also concerned with sustaining peak performance and should develop a program for continually evaluating and fine-tuning the IT environment. Here are five strategies to include in your data center optimization plan:

  1. Use the right cooling system. Data center densities are increasing as organizations look to pack more IT resources in a smaller space and take advantage of artificial intelligence, big data analytics and other advanced technologies. Higher power density means more heat, and legacy cooling systems are often unable to keep up. In-row cooling technology can maximize efficiency by placing the cooling unit directly in the row of racks and cabinets. The cold air is focused more directly on the equipment so that heat can be dissipated faster.
  2. Monitor the environment. As the saying goes, “What gets measured gets managed.” Data center infrastructure management (DCIM) solutions collect real-time data on utilization and energy consumption to facilitate decision-making. They also provide analytical tools that enable operational teams to better manage the environment and ensure the performance and availability of IT systems.
  3. Take advantage of automation. Given the ever-increasing demands on the IT environment, many IT operational teams are struggling to keep up with ongoing maintenance and management tasks. Automation can help relieve the pressure and ensure that best practices are followed. Automated change management tools have become especially valuable for minimizing risk in today’s dynamic environment.
  4. Maximize flexibility and scalability. Data center managers need to ensure that the environment can easily accommodate growth and changing requirements. The right infrastructure components are critical. “Pod” units that house multiple racks and include built-in aisle containment make it possible to add data center space quickly in virtually any location.
  5. Align budget with business requirements. In many data centers, up to 80 percent of the budget still goes toward “keeping the lights on,” leaving just 20 percent for innovation. In today’s hyper-competitive business environment, organizations need IT to focus more resources on innovative solutions that increase productivity, enhance customer service and enable new business models. Data center managers should take a hard look at their operational processes and consider outsourcing tasks that are resource-intensive or require specialized skill sets.
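The monitoring strategy in point 2 ultimately comes down to comparing real-time readings against configured limits and raising alerts on breaches. Here is a minimal sketch of that pattern; the metric names and threshold values are illustrative assumptions, not defaults from any real DCIM platform.

```python
# Illustrative thresholds; real DCIM platforms make these configurable
# per rack, per zone, or per device class.
THRESHOLDS = {"power_kw": 8.0, "inlet_temp_c": 27.0, "cpu_util_pct": 85.0}

def check_rack(rack_id, readings):
    """Compare real-time readings against thresholds and return any alerts."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = readings.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{rack_id}: {metric}={value} exceeds limit {limit}")
    return alerts

# A rack drawing 9.2 kW against an 8.0 kW limit triggers one alert.
print(check_rack("rack-07", {"power_kw": 9.2, "inlet_temp_c": 24.5}))
```

In practice the readings would arrive continuously from sensors and intelligent PDUs, and the alerts would feed an escalation workflow rather than a print statement.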

Rahi Systems has designed and implemented data center environments for some of the world’s largest enterprises. We specialize in solutions that optimize performance and efficiency while reducing complexity and operational overhead. We also offer a comprehensive suite of professional and managed services that cover every aspect of data center operations.

We invite you to take advantage of our knowledge and hands-on field experience to guide your IT decisions. Let us help you develop a winning strategy that will get your data center environment in tiptop shape.

Why the IoT Needs Out-of-Band Management

The Internet of Things (IoT) promises to revolutionize entire industries through greater efficiency, enhanced customer service and improved decision-making. Forward-thinking organizations are also developing new products, services and business models that leverage Internet-connected devices that gather data and operate autonomously.

Imagine retail store shelves that send an alert to the warehouse when they need to be replenished. Smart buildings that automatically control lighting and temperature based upon room occupancy and ambient conditions. Fleet vehicles that calculate the best route, and machinery that schedules its own maintenance.

But for all its promise, the IoT has introduced some confounding problems. Organizations are finding that a centralized IT environment creates unacceptable latency when it comes to analyzing IoT data. As a result, more and more computing resources are being pushed to the network edge, closer to users and IoT devices. This creates a far-flung network infrastructure that’s extremely difficult for IT teams to manage.

The latest out-of-band management solutions address this challenge. Originally designed to give IT personnel access to devices when the primary (in-band) network is unavailable, out-of-band management has evolved to support advanced infrastructure management and automation tools. Out-of-band management also gives IT personnel access to equipment that has not yet been configured, enabling zero-touch provisioning of IoT devices and edge systems.

Nodegrid Bold SR from ZPE Systems is a compact form-factor appliance that provides resilient out-of-band management for the edge and IoT. Secure access and control of IT and IoT devices enables a virtual IT presence at the edge of the network. IT support staff can manage devices remotely, simplifying administration and troubleshooting while reducing staffing and travel costs.

The software-defined networking (SDN) capabilities of Nodegrid Bold SR provide a centralized view of infrastructure assets, and allow for automation and policy-based orchestration of network services and applications. Like all Nodegrid solutions, Bold SR features enterprise-grade security protocols and encrypted data transit.

Nodegrid Bold SR leverages network functions virtualization (NFV) to provide routing, firewall, system monitoring and VPN capabilities. NFV saves physical space and cuts hardware costs through the virtualization of physical network appliances. It also speeds deployment because there’s less hardware to install, and provides operational cost savings through reduced power consumption and cooling.

Redundant connectivity methods with automatic failover reduce downtime. Nodegrid Bold SR includes two 4G/LTE SIM slots, Wi-Fi capabilities and optional USB modem support to provide superior availability to devices deployed in pods, retail stores, remote offices and other edge locations.
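Conceptually, automatic failover of this kind means working down a prioritized list of uplinks until one passes a health check. The sketch below illustrates that selection logic only; it is not ZPE's implementation, and the link names are invented for the example.

```python
# Hypothetical link priority for an edge appliance: wired uplink first,
# then the two cellular SIMs, then Wi-Fi (names are illustrative).
LINKS = ["eth0", "lte-sim1", "lte-sim2", "wifi0"]

def select_active_link(is_up):
    """Return the highest-priority link that passes a health check,
    falling back down the list automatically."""
    for link in LINKS:
        if is_up(link):
            return link
    return None  # total outage: no path available

# Simulate the wired uplink and first SIM being down.
status = {"eth0": False, "lte-sim1": False, "lte-sim2": True, "wifi0": True}
print(select_active_link(status.get))  # falls back to "lte-sim2"
```

A real appliance would run the health checks continuously (ping targets, signal strength, DNS resolution) and re-evaluate the active link whenever a higher-priority path recovers.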

Nodegrid Bold SR supports a variety of connection methods, including serial, network and USB, and provides in-band and out-of-band remote access and power controls. Bold SR also offers device health monitoring, alert notifications and actionable data capabilities.

The IoT is compelling organizations to implement micro-environments at the network edge to provide compute and storage resources for a wide range of devices. IT teams need a new set of tools that enable remote monitoring and management of these edge environments. Nodegrid Bold SR extends out-of-band management capabilities to the IoT, and provides reliable, secure and efficient connectivity for edge networks. Contact Rahi Systems to discuss how out-of-band management solutions from ZPE can support and enable your IoT strategy.

Data Center Commissioning Saves Time and Money

There’s a concept in building design and construction known as “commissioning.” Most of us understand the word “commission” to mean a formal request for the building or making of something. A wealthy patron might commission a painting from an artist, for example. However, there’s also a second meaning of “commission”: to bring into working condition something newly produced, such as a factory or machine.

When it comes to, say, an office building, commissioning validates that everything is in working order so that the tenants can move in. It also confirms that all building systems are operating as efficiently as possible, and that monitoring systems are in place to alert maintenance personnel whenever a problem occurs. The idea is that you want to identify excess energy usage, poor indoor air quality or potential safety concerns as soon as possible, before they become major issues.

Data centers are also commissioned, but the process is somewhat different. Yes, commissioning needs to ensure that all building systems are working properly, and that extends to computer room air conditioning (CRAC) units and chillers, uninterruptible power supplies (UPSs), emergency generators, and other data center infrastructure. In addition, data center commissioning must verify that systems fail over correctly. For example, when the electric grid goes down, the UPSs must support the IT equipment until the emergency generator comes online.

A key function of commissioning is to establish a schedule of tests to be performed at various phases of the data center implementation process. If equipment vendors are to conduct the tests, it’s important to verify that the testing procedures will in fact prove that the equipment will perform as required. In addition, integrated systems testing should be performed to confirm that all components of the data center infrastructure work together.
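A commissioning schedule of this kind is essentially a checklist that ties each test to a project phase and records a pass/fail result for the final documentation. The sketch below shows that structure; the phase and test names are hypothetical examples, not a standard commissioning script.

```python
# Hypothetical commissioning schedule: each entry pairs a project phase
# with a test to be run in that phase (names are illustrative).
TEST_SCHEDULE = [
    ("Factory acceptance", "ups_load_test"),
    ("Site acceptance", "generator_start_test"),
    ("Integrated systems", "utility_failure_simulation"),
]

def run_commissioning(results):
    """Build a pass/fail report for each scheduled test; `results` maps
    test names to booleans (supplied by hand here for illustration)."""
    report = []
    for phase, test in TEST_SCHEDULE:
        outcome = "PASS" if results.get(test, False) else "FAIL"
        report.append(f"[{phase}] {test}: {outcome}")
    return report

for line in run_commissioning({"ups_load_test": True,
                               "generator_start_test": True,
                               "utility_failure_simulation": False}):
    print(line)
```

In a real project the results would come from witnessed tests against vendor procedures, and any failure would trigger corrective action and a retest before sign-off.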

Formal documentation should be completed as part of the commissioning process, including final specifications, as-built drawings, test scripts and results, and other records. Commissioning can also aid in the development of “runbooks” with detailed information about the site and standard operating procedures. Operations and maintenance staff should receive formal training on how to maintain the various building systems and ensure data center availability during planned and unplanned outages.

Although commissioning may sound like an extra set of steps in an already complex process, it actually increases the likelihood that the data center will be completed on time and within budget. Commissioning can help catch potential problems early so they can be corrected before costly and time-consuming rework is required.

Commissioning also saves money in two ways. First, it helps to ensure that mission-critical systems are operating efficiently and streamlines ongoing maintenance and management of the facility. Second, and more importantly, it reduces the risk of a data center outage, which costs nearly $750,000 on average according to data from the Ponemon Institute.

Rahi Systems offers a comprehensive array of data center infrastructure services, including assessment, engineering, design, specification, installation and project management. Our seasoned team of professionals can assist you with validation and testing, or work with third-party providers to ensure that everything is functioning as required. We are here to help maximize the efficiency of your data center and minimize problems down the road.