Learn
This section guides you through the process of deploying and operating a Hadoop-based big data platform with TDP. Whether you are a beginner or already have some experience with big data technologies, it provides the knowledge and tools to set up your own scalable and powerful data processing infrastructure.
Before we begin, let’s take a brief look at what TDP is and why it is used for big data processing.
TDP is an open-source platform, based on the Hadoop ecosystem, designed to store and process large volumes of data across distributed clusters of commodity hardware. It provides a reliable and scalable solution to the challenges posed by big data, such as storing and processing massive datasets efficiently.
Throughout these pages, we cover many interesting topics, including:
- Understanding the components of TDP, including the Hadoop Distributed File System (HDFS) for storage, YARN for resource management and scheduling, Ranger for fine-grained authorisation, and Knox for perimeter security (a minimal WebHDFS sketch follows this list).
- Setting up a TDP cluster by installing and configuring the necessary software components, and configuring data replication and fault tolerance mechanisms to ensure high availability and reliability of your data (see the replication sketch below).
- Exploring the ecosystem of tools and frameworks that complement Hadoop, such as Apache Spark for fast data processing and Apache Kafka for real-time data streaming (see the Spark sketch below).
- Learning best practices for managing and monitoring your TDP cluster, including the REST API and cluster management tools such as the TDP CLI and the TDP UI (see the monitoring sketch below).
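
To make the first item more concrete, here is a minimal sketch of talking to HDFS over its WebHDFS REST interface from Python. The hostname, port, and user are placeholders; on a secured TDP cluster such calls would typically go through the Knox gateway with proper authentication rather than directly to the NameNode.

```python
import requests

# Hypothetical endpoint: adjust host, port, and user to your cluster.
# WebHDFS is served by the NameNode (port 9870 by default in Hadoop 3),
# or proxied through the Knox gateway in a secured TDP cluster.
NAMENODE = "http://namenode.example.com:9870"
USER = "tdp_user"

# List the contents of a directory through the WebHDFS REST API.
resp = requests.get(
    f"{NAMENODE}/webhdfs/v1/user/{USER}",
    params={"op": "LISTSTATUS", "user.name": USER},
    timeout=10,
)
resp.raise_for_status()

for status in resp.json()["FileStatuses"]["FileStatus"]:
    # Each FileStatus carries the replication factor, which reflects the
    # fault-tolerance settings discussed in the next sketch.
    print(status["pathSuffix"], status["type"], status["replication"])
```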
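
For the cluster setup item, the cluster-wide default replication factor is controlled by the `dfs.replication` property in `hdfs-site.xml`. The sketch below, again with a hypothetical endpoint and path, shows how the replication factor of an existing file can be overridden through WebHDFS.

```python
import requests

# Hypothetical values: point these at your own NameNode (or Knox gateway) and path.
NAMENODE = "http://namenode.example.com:9870"
USER = "tdp_user"
PATH = f"/user/{USER}/data/events.csv"

# The default replication factor comes from dfs.replication in hdfs-site.xml;
# the WebHDFS SETREPLICATION operation overrides it for a single file.
resp = requests.put(
    f"{NAMENODE}/webhdfs/v1{PATH}",
    params={"op": "SETREPLICATION", "replication": "3", "user.name": USER},
    timeout=10,
)
resp.raise_for_status()
print("Replication updated:", resp.json()["boolean"])
```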
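
For the ecosystem item, a short PySpark sketch illustrates what a Spark job on a TDP cluster might look like. The HDFS path is illustrative, and the job is assumed to be submitted from an edge node where spark-submit supplies the YARN configuration.

```python
from pyspark.sql import SparkSession

# A minimal PySpark job reading from HDFS; the path is illustrative and
# depends on how your TDP cluster is laid out.
spark = (
    SparkSession.builder
    .appName("tdp-learn-example")
    .getOrCreate()  # on a TDP edge node, spark-submit provides the YARN master
)

# Read a text file from HDFS and count word occurrences.
lines = spark.read.text("hdfs:///user/tdp_user/data/sample.txt")
words = lines.selectExpr("explode(split(value, ' ')) AS word")
counts = words.groupBy("word").count().orderBy("count", ascending=False)

counts.show(10)
spark.stop()
```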
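
Finally, for the management and monitoring item, the YARN ResourceManager exposes a REST API that reports cluster-wide metrics; the host below is a placeholder. The TDP CLI and TDP UI covered later provide higher-level management, while this sketch only queries the underlying YARN metrics endpoint.

```python
import requests

# Hypothetical host: the ResourceManager REST API listens on port 8088 by default.
RESOURCE_MANAGER = "http://resourcemanager.example.com:8088"

# Query cluster-wide metrics from the YARN ResourceManager REST API.
resp = requests.get(f"{RESOURCE_MANAGER}/ws/v1/cluster/metrics", timeout=10)
resp.raise_for_status()
metrics = resp.json()["clusterMetrics"]

print("Active nodes:", metrics["activeNodes"])
print("Running apps:", metrics["appsRunning"])
print("Available memory (MB):", metrics["availableMB"])
```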