A single fabric infrastructure is increasingly considered the optimal approach to network build-outs, not only in high-performance computing environments, which are predominantly InfiniBand-based, but also in mainstream enterprise grids and in datacentre server, storage, and virtualised environments. The low latency, high performance, and efficient CPU utilisation delivered by Mellanox InfiniBand and Ethernet solutions, coupled with the economic benefits of consolidation, performance gains, manageability, and network virtualisation, have helped end-customers build out their applications in the most cost-effective manner. Mellanox networking solutions, based on either InfiniBand or Ethernet, are uniquely positioned to satisfy these networking needs.
Learn how Mellanox technology can take your solution to the next level of performance, power efficiency, and cost.
Mellanox CloudX™ is a set of reference architectures that allows companies to build highly efficient, high-performance, and scalable clouds based on Mellanox's superior interconnect and off-the-shelf building blocks (servers, storage, interconnect, and software).
Building an enterprise-ready cloud is a complex task that requires appropriate sizing and integration of hardware and software to optimise usability, scalability, performance, and cost-efficiency. For this reason, many attempts to shift to modern cloud architectures fail. The CloudX platform integrates virtualised compute and scale-out storage with high-bandwidth, low-latency interconnect. This integration not only significantly reduces deployment risk but also maximises ROI and lowers TCO.
Mellanox datacentre networking solutions based on Virtual Protocol Interconnect (VPI) technology enable seamless connectivity to 56Gb/s InfiniBand or 10/40 Gigabit Ethernet (GbE) connections, or a mix of both, depending on your networking requirements. VPI provides I/O infrastructure flexibility and future-proofing for datacentre computing environments, allowing all standard networking, clustering, storage, and management protocols to operate seamlessly over any converged network with the same software infrastructure. Mellanox 10GbE/40GbE solutions deliver lower cost, power, latency, and CPU utilisation for Ethernet-based blade, rack, and tower environments. By using 56Gb/s InfiniBand or 10/40GbE with Fibre Channel over Ethernet (FCoE) to consolidate I/O onto a single wire, cloud providers and IT managers can deliver significantly higher application service levels while achieving their business goals of increased productivity and reduced CAPEX and OPEX for I/O technology spending.
With proven scalability and efficiency, both small and large clusters easily scale up to thousands of nodes. By providing low latency, high bandwidth, high message rates, transport offload for extremely low CPU overhead, Remote Direct Memory Access (RDMA), and advanced communication offloads, Mellanox interconnect solutions are the most widely deployed high-speed interconnect for large-scale simulations, replacing proprietary or lower-performance alternatives. Mellanox Scalable HPC interconnect solutions are paving the road to Exascale computing by delivering the highest scalability, efficiency, and performance for HPC systems today and in the future.
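Why latency and message rate, and not just raw bandwidth, dominate HPC communication can be seen with a simple cost model: the time to move one message is roughly the per-message latency plus the serialisation time (size divided by bandwidth). The sketch below uses illustrative figures (a hypothetical 1 µs fabric latency on a 56 Gb/s link), not measured Mellanox specifications.

```python
# Simple cost model for one message over an interconnect:
#   transfer time ≈ per-message latency + size / bandwidth.
# All numbers are illustrative assumptions, not vendor specifications.

def transfer_time_us(msg_bytes, latency_us, bandwidth_gbps):
    """Time to move one message, in microseconds."""
    # 1 Gb/s = 1e3 bits per microsecond.
    serialisation_us = msg_bytes * 8 / (bandwidth_gbps * 1e3)
    return latency_us + serialisation_us

# A 64-byte MPI-style message is dominated by latency, not bandwidth:
small = transfer_time_us(64, latency_us=1.0, bandwidth_gbps=56)
# A 4 MiB bulk transfer is dominated by bandwidth instead:
big = transfer_time_us(4 * 1024 * 1024, latency_us=1.0, bandwidth_gbps=56)

print(f"64 B message:  {small:.3f} us")   # almost entirely latency
print(f"4 MiB message: {big:.1f} us")     # almost entirely serialisation
```

Because small-message time barely changes with link speed, shaving latency and offloading transport work from the CPU (as RDMA does) is what raises achievable message rates in tightly coupled simulations.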
As new, faster storage technologies have evolved, the storage bottleneck has moved from the storage media to the I/O interconnect. While interconnect technologies have sped up considerably, they often remain the limiting factor in datacentre performance. Removing this bottleneck requires a new way of looking at storage interconnects: speed counts, but the path data takes through the interconnect can drastically change the performance of a datacentre. Mellanox Virtual Protocol Interconnect (VPI) and storage solutions eliminate the storage bottleneck and provide unprecedented storage infrastructure performance at lower cost and complexity than traditional storage networks. This translates into real-world customer advantages, including optimised server utilisation, increased application performance, reduced backup times, greater datacentre simplicity and consolidation, lower power consumption, and lower total cost of ownership (TCO).
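The shift of the bottleneck from media to interconnect can be illustrated with a back-of-the-envelope model: effective storage throughput is capped by the slower of the aggregate media bandwidth and the network link. The drive and link figures below are hypothetical round numbers chosen for illustration.

```python
# Effective storage throughput is limited by whichever is slower:
# the drives in aggregate, or the interconnect link feeding them.
# Figures below are illustrative assumptions.

def effective_throughput_gbps(drives, drive_gbps, link_gbps):
    """Deliverable throughput, capped by media or interconnect."""
    return min(drives * drive_gbps, link_gbps)

# Two fast drives at ~25 Gb/s each offer 50 Gb/s of media bandwidth.
# Over a 10GbE link the interconnect is the bottleneck;
# over a 56 Gb/s link the media bandwidth is fully deliverable.
print(effective_throughput_gbps(2, 25, link_gbps=10))  # interconnect-bound
print(effective_throughput_gbps(2, 25, link_gbps=56))  # media-bound
```

Once media bandwidth exceeds the link, adding drives buys nothing; only a faster interconnect moves the ceiling.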
Big Data analytics used to be the exclusive domain of large enterprises. Now, thanks to parallel processing technologies, rapid analysis of massive data streaming in from many sources at very high speed is going mainstream. A parallel processing architecture breaks a big-data analytics job into smaller jobs and runs them across tens, hundreds, or thousands of commodity servers, which keeps datacentre costs low and provides easy scalability.
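The split-apply-combine pattern described above can be sketched in miniature: a job is split into chunks, each chunk is mapped to a worker, and the partial results are reduced into one answer. Worker processes here stand in for commodity servers; the word-count workload and chunking scheme are illustrative.

```python
# Minimal sketch of the parallel-processing pattern behind big-data
# analytics: split the input, map work across workers, reduce results.
from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    """Map step: word counts for one chunk of the input."""
    return Counter(chunk.split())

def parallel_word_count(text, workers=4):
    # Split step: divide the input lines into roughly equal chunks.
    lines = text.splitlines()
    step = max(1, len(lines) // workers)
    chunks = ["\n".join(lines[i:i + step]) for i in range(0, len(lines), step)]
    # Map step: each worker process counts one chunk independently.
    with Pool(workers) as pool:
        partials = pool.map(count_words, chunks)
    # Reduce step: merge the partial counts into a single result.
    return sum(partials, Counter())

if __name__ == "__main__":
    sample = "fast data\nfast analytics\nscalable data"
    print(parallel_word_count(sample))
```

Because each map task is independent, the same structure scales from a pool of local processes to thousands of servers; frameworks such as Hadoop MapReduce apply exactly this shape at datacentre scale.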
In this context, Web 2.0 refers to providing Infrastructure as a Service (IaaS) to users for their personal and professional use. The current Web 2.0 infrastructure is built on commodity hardware, which keeps datacentre costs low while preserving scalability for the future. These trends have long existed in the HPC industry, but the recent emergence of massive numbers of network-connected devices has led to a new services-oriented infrastructure that requires a reliable network. A lower-latency, higher-throughput network infrastructure enables datacentres to increase their hardware utilisation and scale to massive capacities in an instant, without bottlenecks or poor user experiences. Web 2.0 represents a new tipping point for the value of network computing.
IT departments in financial capital markets face huge growth in the volume of market data while also being under pressure to improve trade-execution times. In these environments, one extra millisecond of trade-execution latency can mean as much as $100 million in lost trades per year. As these datacentres move to multi-core, multi-processor servers and high-speed storage systems, IT managers must ensure that the connections among servers, and between servers and storage, do not create a bottleneck that impedes trade-execution performance.
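Taking the figure quoted above at face value, the implied cost of each microsecond of avoidable latency is easy to estimate; the calculation below simply scales that $100 million-per-millisecond figure linearly, which is an assumption, not a market measurement.

```python
# Implied cost of execution latency, scaled linearly from the figure
# cited in the text: one extra millisecond ≈ $100M in lost trades/year.
# Linear scaling is an illustrative assumption.

COST_PER_MS_PER_YEAR = 100_000_000  # dollars, as quoted in the text

def annual_cost(extra_latency_us):
    """Implied yearly cost (in dollars) of extra execution latency."""
    return extra_latency_us / 1000 * COST_PER_MS_PER_YEAR

# Even 50 microseconds of avoidable interconnect latency is material:
print(f"50 us of added latency ≈ ${annual_cost(50):,.0f} per year")
```

Viewed this way, even small per-hop delays in the server-to-server and server-to-storage path translate into sums that dwarf the cost of the interconnect itself.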