Previously I wrote about the difference in technology today compared to when MQ first came out. One of the most notable areas is network speed and how that relates to I/O as well as reliability. Not that long ago, you probably saved changes to your Word or PowerPoint documents every few minutes just to make sure you didn’t lose a day’s worth of work in case of a problem. OK, let’s be honest: some of us did lose a day’s worth of work because we forgot to save often enough. When IBM MQ first came out, in order to provide guaranteed delivery it also had to deal with the high volatility and slow speeds of the networks and storage then available. As such, it had to be very frugal in its use of resources and make sure that any changes were hardened to disk.
Today, Word and PowerPoint constantly save your work in the background, so if something bad were to happen you are almost up to date, and it’s also less common that something bad does happen. So it’s no surprise that if you compare the recovery strategy for Kafka with the recovery strategy for MQ, there are a lot of differences. Kafka makes assumptions about the environment that would not have been possible even five years ago.
For example, Kafka relies on multiple concurrent replicas of data, kept in sync at very high speed, which in turn are required to provide data integrity. These techniques leverage high-speed networks and network-attached storage that is orders of magnitude faster than the fastest local storage devices that existed not that long ago.
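To make the replication idea concrete, here is a minimal sketch of how that replication is typically configured with Kafka’s standard CLI tools. The topic name, broker address, and specific values below are illustrative assumptions, not taken from the article:

```shell
# Create a topic whose partitions are replicated across three brokers.
# min.insync.replicas=2 means a write is only acknowledged once at least
# two replicas have the data. (Topic name, broker address, and the
# specific values here are illustrative.)
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic orders \
  --partitions 6 \
  --replication-factor 3 \
  --config min.insync.replicas=2

# Producers pair this with acks=all, so a message is durable on the
# in-sync replica set before the client sees a confirmation.
kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic orders \
  --producer-property acks=all
```

The combination of a replication factor, a minimum in-sync replica count, and producer acknowledgements is what lets Kafka trade the fsync-every-write frugality of early MQ for durability guaranteed by multiple fast network copies.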
IBM MQ has continued to modernize its facilities to take advantage of technology changes. For example, multi-instance queue managers were created, which relied on network-attached storage but came with restrictions. RDQM was then introduced, which removed many of those restrictions but was only available on Linux systems. I can envision other recovery options becoming available in the future, to bring MQ on par with modern systems.
But these trends have also created new challenges. Since the components are distributed across a number of nodes and move dynamically, visibility requires tools that have been modernized and can handle the dynamic nature of this environment. Nastel’s products have constantly been innovating to make sure that you can do your job no matter how complex the underlying infrastructure is.
Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure, delivering tools for middleware management, monitoring, tracking, and analytics that detect anomalies, accelerate decisions, answer business-centric questions, and provide actionable guidance for decision-makers, enabling customers to constantly innovate. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS, and ACE/IIB, and also supports RabbitMQ, ActiveMQ, Blockchain, IoT, DataPower, MFT, IBM Cloud Pak for Integration, and many more.
The Nastel i2M Platform provides:
- Secure self-service configuration management with auditing for governance & compliance
- Message management for Application Development, Test, & Support
- Real-time performance monitoring, alerting, and remediation
- Business transaction tracking and IT message tracing
- AIOps and APM
- Automation for CI/CD DevOps
- Analytics for root cause analysis & Management Information (MI)
- Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics