1. Introduction

Genie is a complex service, and it can be hard to understand the value it brings to a data platform without seeing it in action. This set of demo steps exists to show how Genie fits into a data platform and how it can help both administrators and users.

For high-level concept documentation please see the website.
For high-level information and installation instructions please see the Reference Guide.
For documentation of the REST API for this version of Genie please see the API Guide.

2. Info

2.1. Prerequisites

  • Docker

  • Docker Compose

  • Memory

    • At least 6 GB available to Docker

  • Disk Space

    • About 5.5 GB for the 5 images

  • Available Ports on your local machine

    • 8080 (Genie)

    • 8088, 19888, 50070, 50075, 8042 (YARN Prod Cluster)

    • 8089, 19889, 50071, 50076, 8043 (YARN Test Cluster)

    • 9090 (Trino Cluster)
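
If you want to confirm these ports are free before starting the demo, here is a quick sketch using Python's standard socket module (run it on your local machine):

    import socket

    # Ports the demo binds on localhost (see the Port Usages section below)
    PORTS = [8080,                              # Genie
             8088, 19888, 50070, 50075, 8042,   # YARN prod cluster
             8089, 19889, 50071, 50076, 8043,   # YARN test cluster
             9090]                              # Trino

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            # connect_ex returns 0 if something is already listening there
            if s.connect_ex(("localhost", port)) == 0:
                print(f"Port {port} is already in use")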

2.2. Development Environment

For reference, here are the machine specs this demo has been tested on:

  • Mid-2018 MacBook Pro

    • macOS Catalina 10.15.5

    • 2.9 GHz 6-Core Intel Core i9

    • 32 GB 2400 MHz DDR4

  • Docker Desktop 2.3.0.3

    • Docker Engine 19.03.8

    • Docker Compose 1.25.5

    • Preferences

      • 6 CPUs

      • 6 GB RAM

      • 1 GB swap

2.3. Caveats

  • Since all of this is running locally on one machine it can be slow, much slower than you’d expect production-level systems to run

  • Networking within the Hadoop UIs can be unreliable due to how DNS resolution works between the containers. If you click a link in a UI and it doesn’t work, try swapping localhost in for the hostname.

2.4. Port Usages

Table 1. Genie Endpoints

  Endpoint   URL
  UI         http://localhost:8080
  API        http://localhost:8080/api/v3/
  Actuator   http://localhost:8080/admin
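
These endpoints can also be checked from code. Here is a minimal sketch in Python (assuming the requests library is installed on your machine; it is not part of the demo) that asks the Actuator for the server health and the API for the job list:

    import requests

    GENIE = "http://localhost:8080"

    # Actuator health endpoint (standard Spring Boot actuator under /admin)
    health = requests.get(f"{GENIE}/admin/health")
    print(health.json().get("status"))  # prints "UP" once the server is ready

    # The v3 REST API returns jobs as a paginated HAL document
    jobs = requests.get(f"{GENIE}/api/v3/jobs")
    print(jobs.status_code)  # 200 when the API is reachable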

Table 2. Hadoop Interfaces

  UI                   Prod URL                 Test URL
  Resource Manager     http://localhost:8088    http://localhost:8089
  Job History Server   http://localhost:19888   http://localhost:19889
  NameNode             http://localhost:50070   http://localhost:50071
  DataNode             http://localhost:50075   http://localhost:50076
  Container Logs       http://localhost:8042    http://localhost:8043

Table 3. Trino Interfaces

  Endpoint   URL
  Web UI     http://localhost:9090

2.5. Scripts

Table 4. Admin Scripts

  Script Name   Invocation        Purpose
  Init          ./init_demo.py    Initialize the configuration data in the Genie system for the rest of the demo
  Move Tags     ./move_tags.py    Move the production tag sched:sla from the prod cluster to the test cluster
  Reset Tags    ./reset_tags.py   Move the production tag sched:sla back from the test cluster to the prod cluster
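
Under the hood, moving a tag amounts to a pair of calls against the v3 cluster tag endpoints. A rough sketch in Python with the requests library (the cluster IDs are hypothetical placeholders; the real script looks the clusters up first):

    import requests

    GENIE = "http://localhost:8080"
    TAG = "sched:sla"

    # Hypothetical IDs -- resolve the real ones via GET /api/v3/clusters
    PROD_ID = "prod-cluster-id"
    TEST_ID = "test-cluster-id"

    # Remove the tag from the prod cluster...
    requests.delete(f"{GENIE}/api/v3/clusters/{PROD_ID}/tags/{TAG}")

    # ...and add it to the test cluster (the endpoint takes a JSON array)
    requests.post(f"{GENIE}/api/v3/clusters/{TEST_ID}/tags", json=[TAG])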

Table 5. Job Scripts

  Job                  Invocation                                   Action
  Hadoop               ./run_hadoop_job.py {sla|test}               Runs grep against the input directory in HDFS
  HDFS                 ./run_hdfs_job.py {sla|test}                 Runs dfs -ls on the input directory in HDFS and stores the results in stdout
  Spark Shell          ./run_spark_shell_job.py {sla|test}          Prints the Spark Shell help output to stdout
  Spark Submit 2.4.x   ./run_spark_submit_job.py {sla|test} 2.4.6   Runs the SparkPi example for Spark 2.4.x with an input of 10; results stored in stdout
  Spark Submit 3.0.x   ./run_spark_submit_job.py {sla|test} 3.0.0   Runs the SparkPi example for Spark 3.0.x with an input of 10; results stored in stdout
  Trino                ./run_trino_job.py                           Sends a query (select * from tpcds.sf1.item limit 100;) as an attachment file to the Trino cluster and dumps the results to stdout
  YARN                 ./run_yarn_job.py {sla|test}                 Lists all YARN applications from the Resource Manager into stdout
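
Each of these scripts ultimately submits a job request to Genie. A simplified sketch of an equivalent submission straight against the v3 REST API (the real scripts may use the Genie Python client instead; the tag values here are illustrative):

    import requests

    GENIE = "http://localhost:8080"

    job_request = {
        "name": "Demo YARN Job",
        "user": "demo",
        "version": "1.0",
        # Cluster selection: the first criterion whose tags match a cluster wins
        "clusterCriterias": [{"tags": ["sched:test", "type:yarn"]}],
        # Command selection: a command registered with all of these tags
        "commandCriteria": ["type:yarn"],
        "commandArgs": "application -list",
    }

    resp = requests.post(f"{GENIE}/api/v3/jobs", json=job_request)
    resp.raise_for_status()
    # Genie responds with the new job's URL; the ID is its last path element
    print(resp.headers["Location"].rsplit("/", 1)[-1])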

3. Demo Steps

  1. Open a terminal

  2. Download the Docker Compose file

    1. Save the docker-compose.yml file provided with this demo somewhere on your machine

  3. Go to your working directory

    1. Wherever you downloaded the docker-compose.yml to

    2. cd YourWorkDir

  4. Start the demo containers

    1. docker-compose up -d

      1. The first time you run this it could take quite a while as it has to download 5 large images

      2. This will use docker compose to bring up 6 containers

        1. genie_demo_app_4.3.0

          1. Instantiation of netflixoss/genie-app:4.3.0

          2. Image from official Genie build which runs Genie app server

          3. Maps port 8080 for Genie UI

        2. genie_demo_apache_4.3.0

          1. Instantiation of netflixoss/genie-demo-apache:4.3.0

          2. Extension of the Apache web server image, which includes files that Genie will download during the demo

        3. genie_demo_client_4.3.0

          1. Instantiation of netflixoss/genie-demo-client:4.3.0

          2. Simulates a client node for Genie and includes several Python scripts to configure and run jobs on Genie

        4. genie_demo_hadoop_prod_4.3.0 and genie_demo_hadoop_test_4.3.0

          1. Instantiations of sequenceiq/hadoop-docker:2.7.1

          2. Simulates two clusters available and registered with Genie, one acting as a production cluster and the other as a test cluster

          3. See Hadoop Interfaces table for list of available ports

        5. genie_demo_trino_4.3.0

          1. Instantiation of trinodb/trino:374

          2. Single node Trino cluster

          3. Web UI bound to localhost port 9090

  5. Wait for all services to start

    1. Verify the Genie UI and both Resource Manager UIs are available via your browser
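
    2. If you prefer to script this wait, here is a small sketch that polls the Genie health endpoint until the server reports UP (assumes Python with the requests library on your local machine):

      import time

      import requests

      # Poll Genie's Actuator health endpoint until the server is up
      while True:
          try:
              resp = requests.get("http://localhost:8080/admin/health", timeout=5)
              if resp.ok and resp.json().get("status") == "UP":
                  break
          except (requests.exceptions.RequestException, ValueError):
              pass  # server not accepting connections (or not ready) yet
          time.sleep(5)
      print("Genie is ready")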

  6. Check out the Genie UI

    1. In a browser navigate to the Genie UI and notice there are currently no Jobs, Clusters, Commands or Applications

    2. These are available by clicking on the tabs in the top left of the UI

  7. Login to the client container

    1. From terminal docker exec -it genie_demo_client_4.3.0 /bin/bash

      1. This should put you into a bash shell in /apps/genie/example within the running container

  8. Initialize the System

    1. Back in the terminal, initialize the configurations for the two clusters (prod and test), the commands (hadoop, hdfs, yarn, spark-submit, spark-shell, trino) and the applications (hadoop, spark)

    2. ./init_demo.py

    3. Feel free to cat the contents of this script to see what is happening
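
    4. For a sense of what the script does: registering a cluster, for example, boils down to a single create call against the v3 API. A trimmed sketch (the payload values are illustrative, not the script's exact data):

      import requests

      cluster = {
          "name": "GenieDemoTest",
          "user": "demo",
          "version": "2.7.1",
          "status": "UP",
          "tags": ["sched:test", "type:yarn"],
      }
      resp = requests.post("http://localhost:8080/api/v3/clusters", json=cluster)
      resp.raise_for_status()  # 201 Created with a Location header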

  9. Verify Configurations Loaded

    1. In the browser, load the Genie UI again and verify that the Clusters, Commands and Applications tabs now have data in them

  10. Run some jobs

    1. See the Job Scripts table for available commands

    2. For example:

      1. ./run_hadoop_job.py test

      2. ./run_yarn_job.py test

      3. ./run_hdfs_job.py test

      4. ./run_spark_submit_job.py sla 2.4.6

      5. ./run_trino_job.py

    3. Replace test with sla to run the jobs against the Prod cluster

    4. If any of the Docker containers crash, you may need to increase the memory available to Docker in its preferences. The default for a fresh installation is 2 GB, which is not sufficient for this demo. Use docker stats to verify the limit is 4 GB or higher.

  11. For each of these jobs you can see their status, output and other information via the UI’s

    1. In the Jobs tab of the Genie UI you can see all the job history

      1. Clicking any row will expand that job information and provide more links

      2. Clicking the folder icon will bring you to the working directory for that job

    2. Go to the respective cluster Resource Manager UIs and verify the jobs ran on their respective clusters
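
    3. The same information is available over the REST API if you’d rather script it. A sketch (the job ID is a placeholder; copy a real one from the Jobs tab):

      import requests

      GENIE = "http://localhost:8080"
      JOB_ID = "your-job-id"  # placeholder -- copy an ID from the Jobs tab

      # Current job status, e.g. {"status": "SUCCEEDED"}
      print(requests.get(f"{GENIE}/api/v3/jobs/{JOB_ID}/status").json())

      # Files from the job's working directory, e.g. its stdout
      print(requests.get(f"{GENIE}/api/v3/jobs/{JOB_ID}/output/stdout").text)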

  12. Move load from prod to test

    1. Let’s say there is something wrong with the production cluster. You need to fix it without interfering with your users, so let’s temporarily switch the load over to the test cluster using Genie

    2. In the terminal, move the prod tag sched:sla from the Prod cluster to the Test cluster

      1. ./move_tags.py

    3. Verify in Genie UI Clusters tab that the sched:sla tag only appears on the GenieDemoTest cluster
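
    4. You can also verify this over the API. A sketch that lists all clusters and prints their tags (the HAL _embedded key name here is an assumption about the v3 response format):

      import requests

      # List registered clusters and print each one's name and tags
      resp = requests.get("http://localhost:8080/api/v3/clusters").json()
      for cluster in resp.get("_embedded", {}).get("clusterList", []):
          print(cluster["name"], cluster["tags"])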

  13. Run more of the available jobs

    1. Verify that all jobs went to the GenieDemoTest cluster and none went to the GenieDemoProd cluster, regardless of which environment you passed to the scripts above

  14. Reset the system

    1. You’ve resolved the issues with your production cluster. Move the sched:sla tag back

    2. ./reset_tags.py

    3. Verify in Genie UI Clusters tab that sched:sla tag only appears on GenieDemoProd cluster

  15. Run some jobs

    1. Verify jobs are again running on the Prod and Test clusters based on the environment passed in

  16. Explore the scripts

    1. Look through the scripts to get a sense of what is submitted to Genie

  17. Log out of the container

    1. exit

  18. Log in to the main Genie app container (which contains the agent CLI)

    1. From terminal docker exec -it genie_demo_app_4.3.0 /bin/bash

  19. Verify you can launch the agent

    1. java -jar /usr/local/bin/genie-agent.jar help

  20. Verify the agent can connect to the local Genie server

    1. java -jar /usr/local/bin/genie-agent.jar ping --serverHost localhost --serverPort 9090

  21. Launch a Genie job, similar to the ones above

    1. java -jar /usr/local/bin/genie-agent.jar exec --serverHost localhost --serverPort 9090 --jobName 'Genie Demo CLI Trino Job' --commandCriterion 'TAGS=type:trino' --clusterCriterion 'TAGS=sched:adhoc,type:trino' -- --execute 'select * from tpcds.sf1.item limit 100;'

    2. java -jar /usr/local/bin/genie-agent.jar exec --serverHost localhost --serverPort 9090 --jobName 'Genie Demo CLI Spark Shell Interactive Job' --commandCriterion 'TAGS=type:spark-shell' --clusterCriterion 'TAGS=sched:sla,type:yarn' --interactive

      1. This starts an interactive Spark shell. Hit ctrl-d to exit gracefully

  22. In the Genie UI, explore the two jobs

    1. Notice how the first one (non-interactive) dumped the query results into its stdout file

    2. Notice how the second one (interactive) does not create stdout and stderr files, since the streams are presented directly in the shell

  23. Log out of the container

    1. exit

  24. Once you’re done trying everything out you can shut down the demo

    1. docker-compose down

    2. This will stop and remove all the containers from the demo. The images will remain on disk, so if you run the demo again it will start up much faster since nothing needs to be downloaded or built.

4. Feedback

If you have any feedback about this demo feel free to reach out to the Genie team via any of the communication methods listed in the Contact page.