Hadoop would not have become synonymous with “Big Data” had it not been for the pioneering work and marketing efforts of companies such as MapR, Cloudera and Hortonworks. Each of these organizations made the concurrent use of a number of distributed, component-based ASF services accessible by bundling them into a deployable service stack with a centralized management component.
Today, almost a decade since the commercial introduction of Hadoop, many organizations are using and managing the services of Hadoop along with those of other ASF components such as Pig, Hive, Sqoop, Mahout, Flume and more.
Many ASF services are complementary in function, and the need to use these services concurrently led the vendors to add a management component to the service stack.
This management component enables the centralized provisioning, management and monitoring of the topology of services. While some stack vendors first chose to make this management capability proprietary, others chose to support a service stack of ASF component services via Apache Ambari, the open-source Apache project created specifically for centralized management of ASF and non-ASF services.
Most “Hadoop” consumers today believe they are not able to define a customized stack of services on their own. While this perception may stem from the terms of a stack vendor’s support agreement, creating a customized stack of services is one of the tenets of Ambari and the open-source community.
With vendor-defined and supported service stacks, many of the services go unused, or the service stack comes to be seen as the “hammer” and every application as a “nail” — that is, each application must be designed to use only the services provided by the stack vendor.
There are over 350 ASF projects available for building a service stack specific to the needs of an organization. One of those projects, Ambari, was designed to enable the creation of custom service stacks. The benefit of a custom stack is two-fold: only the services needed are in the stack, and the complete repertoire of all 350 projects can be viewed as potential candidates for membership in the stack.
This course is a Do-It-Yourself instruction guide on how to install, manage and support a custom, centrally-managed ASF service stack. The student will be shown how to configure the ASF Ambari project to create a custom stack of managed ASF services, and will learn how to configure Ambari’s behavior toward each service that is added to its stack. This behavior includes how Ambari displays the main and ancillary web pages for the service and its worker processes, which alerts the Ambari agents report about that service to the service pages, and the other behaviors necessary for the complete administration of each service selected to be in the Ambari service stack.
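As a concrete illustration of the kind of configuration covered in the course: Ambari learns about each service in its stack through a metainfo.xml definition placed in the stack’s services directory on the Ambari server. The following is a minimal sketch only — the service name SAMPLESRV, its component, and the script path are illustrative placeholders, not part of any shipped stack:

```xml
<!-- Illustrative sketch of an Ambari service definition (metainfo.xml).
     Placed under the stack directory on the Ambari server, e.g.
     .../resources/stacks/<STACK>/<VERSION>/services/SAMPLESRV/metainfo.xml.
     Names below are hypothetical examples. -->
<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      <name>SAMPLESRV</name>
      <displayName>Sample Service</displayName>
      <comment>A custom service managed by Ambari</comment>
      <version>1.0.0</version>
      <components>
        <component>
          <name>SAMPLESRV_MASTER</name>
          <displayName>Sample Master</displayName>
          <category>MASTER</category>
          <cardinality>1</cardinality>
          <!-- Script the Ambari agent invokes for install/start/stop/status -->
          <commandScript>
            <script>scripts/master.py</script>
            <scriptType>PYTHON</scriptType>
            <timeout>600</timeout>
          </commandScript>
        </component>
      </components>
    </service>
  </services>
</metainfo>
```

Definitions of alerts and service web pages are configured through additional files alongside this one; the course walks through each of these in lab exercises.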
This 4-day course covers the technical aspects a developer, architect or DevOps engineer will need to know to install Ambari and to configure it to support a custom service stack.
Throughout the class, the student will learn how to use Ambari for centralized service administration, management and monitoring of the service stack. The course will conclude with a detailed approach to ways an organization can build an efficient support system for its stack of services by utilizing the abundant, no-cost resources from the ASF community.
DevOps experience with Linux is a prerequisite. Understanding of the tenets of the Apache Software Foundation is necessary, and since Hadoop is one of the services that will be installed in the Ambari stack during lab time, knowledge of or experience with Hadoop will be helpful. It is suggested that a student new to Hadoop first take the DFHz course “Advanced Hadoop.”
DevOps individuals, architects and anyone needing to define, manage and support a custom ASF component service stack.
This is a 4-day class when taught on-site with ILT or via WebEx with VILT. It is also offered on a per-module basis for online self-enablement via our LMS, Brane.