Log4j gets added to the code “wall of shame.”
It seems that every few weeks we are alerted to a new significant security issue in one of the plethora of widely used code components. Each time, the same pundits raise the same range of concerns about open-source code.
- “It’s because people are not paid to develop this code; if only they were paid fairly, then the code would be more rigorously tested.”
- “It’s because companies don’t put enough effort into testing their solutions before going into production.”
- “It’s because legacy features are maintained for backward compatibility, and this leads to increased risks.”
The list of “usual suspects” is long, and I know I could add at least 20 additional “reasons” to this list without thinking about it too hard.
I’m not sure that open-source code is riskier than proprietary code.
There I said it. When you have code used by millions of developers in all kinds of scenarios, you have a form of evolutionary testing that is hard to replicate in any other way.
When a company builds code from scratch, it gains a level of control. Still, the cost and time involved are far greater, and testing will always be limited to the company’s specific expected use cases.
On the other hand, when teams of students and amateurs mix with experienced professionals doing extra work in their spare time, the outcome can be quite impressive (though not always), and the flame wars and bug reports from hundreds, thousands, or millions of users expose many subtle issues that can then be worked around or fixed in updates.
Without the concept of open-source, the rate and pace of development would look very different. It’s incredible to see how the brainpower of a significant proportion of the human race is being used to benefit all (capitalism and socialism end up being the same thing, it seems).
There are some impressively rigorous standards continuously being developed and applied to test code and certify it for different levels of availability and security.
Now, companies are also starting to consider performance much earlier in their architecture and development cycles. Alongside security and availability testing, benchmarking performance is a powerful way of exposing cost risk to your business.
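To make the idea concrete, here is a minimal sketch of the kind of throughput measurement such a benchmark performs. It uses Python’s standard-library queue as a stand-in for a real broker; all names and workload sizes are illustrative, and a real benchmark would target an actual messaging product (IBM MQ, Kafka, and so on) over the network.

```python
# Illustrative throughput benchmark against an in-memory queue.
# This is a sketch of the measurement idea only, not a real
# middleware benchmark: payload sizes and message counts are
# arbitrary, and a real test would include network and broker costs.
import queue
import time

def benchmark(num_messages: int, payload_size: int) -> float:
    """Return messages per second for a produce-then-consume run."""
    q = queue.Queue()
    payload = b"x" * payload_size
    start = time.perf_counter()
    for _ in range(num_messages):
        q.put(payload)   # produce
    for _ in range(num_messages):
        q.get()          # consume
    elapsed = time.perf_counter() - start
    return num_messages / elapsed

# Varying the workload shows how throughput changes with payload
# size, which is exactly the kind of data that exposes cost risk.
small = benchmark(10_000, 128)
large = benchmark(10_000, 65_536)
print(f"128 B: {small:,.0f} msg/s; 64 KiB: {large:,.0f} msg/s")
```

Running the same measurement across different middleware solutions and workload shapes is what turns raw numbers into an architectural decision.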
Recently we published a performance benchmarking report that compares different integration infrastructure (i2) messaging middleware solutions under varying workloads in different environments. This adds data to an area of testing that can help architects and developers choose the most appropriate option for their specific needs. You can find this paper here.
Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure, delivering tools for middleware management, monitoring, tracking, and analytics that detect anomalies, accelerate decisions, answer business-centric questions, and provide actionable guidance for decision-makers. It focuses particularly on IBM MQ, Apache Kafka, Solace, TIBCO EMS, and ACE/IIB, and also supports RabbitMQ, ActiveMQ, Blockchain, IoT, DataPower, MFT, IBM Cloud Pak for Integration, and many more.
The Nastel i2M Platform provides:
- Secure self-service configuration management with auditing for governance & compliance
- Message management for Application Development, Test, & Support
- Real-time performance monitoring, alerting, and remediation
- Business transaction tracking and IT message tracing
- AIOps and APM
- Automation for CI/CD DevOps
- Analytics for root cause analysis & Management Information (MI)
- Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics