Spokespeople from NetApp, Nutanix, Lenovo and Rubrik discuss the present, and the future, of a market valued at $2 billion.

Hyperconverged infrastructure systems bring together the ingredients needed to deliver scalability and elasticity to enterprises, replacing traditional IT solutions. To discuss their possibilities, NetMediaEurope brought together four experts from the technology industry: Jaime Balañá, Technical Director at NetApp; Alberto Carrillo, Sales Engineer Manager at Nutanix; Manuel Díaz, Solution Sales Executive at Lenovo; and Miguel Ángel Serrano, Sales Engineer at Rubrik.

These professionals starred in the online event “2018, the Year of Hyperconverged Infrastructure”. In a conversation moderated by Mónica Valle, a journalist specializing in IT, the four analyzed the state of hyperconvergence: its growth, its standing as a standard and its advantages; the kinds of companies that can benefit from adopting it; the phenomenon of edge computing; the role of software; working with data; security; and technologies such as NVMe and future trends.

Growth of 55%

Gartner forecasts suggest that 2018 may be the year of hyperconverged infrastructure, with sales growing at a rate of 55% compared to 2017. And the increase will not stop there: the market is expected to approach $5 billion in 2019, up from its current value of $2 billion. Factors such as agility are contributing to this. “The numbers speak very clearly. It is clear that the market has grown a lot,” says Jaime Balañá, Technical Director at NetApp, who explains that customers are looking for “simplicity”.

“For many companies, IT infrastructure is a kind of obstacle they have to overcome to reach their goals, and hyperconverged infrastructure makes that obstacle considerably smaller,” adds Balañá. Alberto Carrillo, Sales Engineer Manager at Nutanix, agrees that “the main objective is to eliminate that complexity”, which he considers “the most expensive asset we have right now in data centers”. Manuel Díaz, Solution Sales Executive at Lenovo, sees it the same way: for companies, “forgetting a bit about managing infrastructure and devoting a little more to the business is a very good opportunity”.

Customers are “bombarded with things like machine learning, artificial intelligence, Big Data” and more, but they find themselves “tied down”, says Miguel Ángel Serrano, Sales Engineer at Rubrik. Perhaps “the infrastructure will not let them move forward; it is not agile, it is not simple, it does not scale as it should”. This is where hyperconvergence comes into play: it “solves this whole problem in a very simple way”, it has come “to stay”, and “it is going to replace a large part of the assets” in the data center, as Serrano describes it.

Can it be considered a standardized model? Alberto Carrillo believes it can: “it is something that is standard” or, at least, we are on the path to it. He has gone from having to “explain the solution” to letting customers themselves spread it by “word of mouth”. At this point, “the evangelization work being done is more about what a cloud business is”, with the idea that “infrastructure is a means to actually get the experience of a public cloud, but in the data center”. Jaime Balañá notes that “the hybrid cloud is the strategy most customers are following”.

For his part, Manuel Díaz notes “two speeds in adopting hyperconvergence”. The typical “small and medium customer”, abundant in Spain, has been the first to recognize its value, while “large customers” with “different areas of infrastructure” are “finding it somewhat harder”, for reasons of “culture”. There are customers who “find it hard to put everything into hyperconvergence”, Balañá also points out, “for performance reasons”. In fact, “I think those concerns will always exist, and there will be some workloads that will not, or never will, move to hyperconvergence”, ventures NetApp's Technical Director.

In “hyperconvergence, our enemy is the status quo”, says Alberto Carrillo. In any case, Miguel Ángel Serrano believes that, “in the end, it is going to reach every level of our data center: from the application itself, where our developers are, through all the infrastructure, down to the last point, which is going to be the last line of defense: the backup”.

Both Serrano and Díaz point to smaller companies as the ones that see a return on investment most quickly. “The idea”, the Rubrik Sales Engineer reminds us, “is that company directors should not be busy managing the infrastructure, but able to manage the services that run on that infrastructure. That way, the services will grow” and “companies can offer differentiating things in their business versus the competition”. In organizations with “a more compact IT management group” it is “much easier. You no longer need to be a storage expert, a network expert, a compute expert... because one of the good things about hyperconverged solutions is their ease of management,” says Díaz.

“The concept shifts a little from managing infrastructure to consuming infrastructure”, compares the Lenovo Solution Sales Executive, who sees it as an “advantage” for “large companies over small ones” that, “once they adopt this kind of model, they can also deploy workloads in different ways”. For them, “setting up hyperconverged infrastructure they can forget about managing, and instead concentrating on mounting a corporate cloud on top of it and putting their maximum effort and resources there, is also a big advantage”.

Edge computing, software, data platforms, security and NVMe

Beyond the data center, how does hyperconvergence fit into edge computing? Rubrik's spokesman explains that “edge computing consists of distributing the workloads” without “dispersing them” and without “losing control of them”. The aim is to “distance them a bit from what the infrastructure core would be but, in turn”, have “everything behave as a single system”. This is something that “traditional systems, more physical systems”, are not able to “achieve”, Serrano explains, being “designed to function more or less in isolation. The only way to achieve this is by using new technologies” without “command lines to deal with”.

Alberto Carrillo talks about solving “four problems”, which have to do with “physical infrastructure”, “applications when they are heterogeneous”, “storage” and “management”. With “a single console”, says the Nutanix representative, replicated infrastructures become one, and that simplicity means that edge projects that “were not viable” unless “enormous amounts of effort were invested, and with relative resilience”, can now be “taken on”.

Computing, storage, connectivity... everything is now defined by software. How relevant is this? What is the role of software in hyperconverged environments? It plays “the most important” one, according to Jaime Balañá. The solutions on the market “do not differ much” in terms of “physical hardware”, so “all the advantage”, says Balañá, “is precisely in the software. That is where the difference lies”. The NetApp manager considers both the “management” software and “how the platform's architecture is built”.

“Agility is the fundamental property” provided by software-defined infrastructure, according to Manuel Díaz. That is, “the ability to have what you want, when you want, and the way you want”, as well as the “ability to scale” and “to make life for IT people much easier than it was until now”. Miguel Ángel Serrano chooses “adaptability” as the key point. “Software is much more adaptable than hardware. Hardware is what it is, it is iron” and “without it you cannot do anything, but what software gives you is the ability to adapt”, he distinguishes. “With a software-driven infrastructure this is much simpler. We directly move what is needed, we reprogram what we need” and “the infrastructure part is ready for the service to start running on what we are managing”.

It all comes down to working as efficiently as possible, which is where data platforms come into play. How important are these platforms in managing and protecting information? “Data must be protected at all times, and hyperconverged solutions protect it perfectly,” says Serrano. The Rubrik Sales Engineer values that “they are able to make replicas” and “to mount disaster recovery systems”, and states that “the data always” ends up “in a backup repository”. For Alberto Carrillo, “security is key” and is “one of the things that allows you to automate the operation of the infrastructure and its whole lifecycle”, because “you have a substrate that allows you to implement policies”.

Among the innovations being introduced, one is NVMe. By Jaime Balañá's definition, it “is one more step that everyone is going to adopt, if they have not already done so. What NVMe allows you to do is eliminate one of the typical bottlenecks” in “access to data stored on a disk”. Until now, recalls NetApp's Technical Director, “doing anything” meant using “a protocol that was about 30 years old”. NVMe, as a “natural evolution”, delivers “much higher performance, which in the end is a benefit to the customer, who can do more things with the same infrastructure”. Balañá is satisfied that “in the end, hyperconvergence is about exactly that: doing more with less, or with fewer problems”.

Alberto Carrillo stresses that this technology “is infrastructure, iron... but” it “is disruptive” nonetheless. He defines it as something like “a GPU for storage”. Similarly, Manuel Díaz mentions that “the physical infrastructure is the same for hyperconvergence as for traditional servers”, and notes that NVMe solves the problem of storage speed. “That, together with the costs it now has, makes it completely feasible with current technology and available on the market. A year ago it was unfeasible,” says the Lenovo executive; “today it is an option”. Though still an “expensive” one, even for backup, according to Miguel Ángel Serrano.

What is still to come

If forecasts are met, hyperconvergence will continue to improve its numbers in 2019. What can we expect? “The adoption of new technologies such as NVMe is one of the things we hope will become more common,” says Jaime Balañá, who also mentions “storage-class memory, the new disk technology based on non-volatile memory”, which will “further reduce the latency of access to that data”. Still to come is “integration with other software or technologies that are becoming very common in the business environment”, such as “containers”, “new hypervisors” and “public cloud”. Alberto Carrillo, for his part, wants hyperconvergence to start from “the services we already have” and “complement them with additional services”.

Manuel Díaz equates the situation of hyperconvergence “with what happened with virtualization”, now that “there is almost nothing left of non-virtualized x86”. And Miguel Ángel Serrano also bets on “hyperconverged environments” as “the future”, or “rather the present”, and as “the basis for practically everything in the coming years”.

You can watch the event again here.