Lab 6.2: Scaling Elasticsearch

Objective:

In this lab, you will allocate shards to optimize write throughput during an initial load, then configure replicas to optimize reads and availability of the index. You will also scale up your cluster by adding one node.

  1. Suppose you want to index many documents into a newly created index. You can improve write speed by disabling replicas (normally risky, but acceptable in this scenario if you have the data backed up somewhere else). Create a new index that satisfies the following requirements:

    • is named temp1
    • has four primary shards
    • has zero replica shards
    • has refreshing disabled
    Solution
    PUT temp1
    {
      "settings": {
        "number_of_shards": 4,
        "number_of_replicas": 0,
        "refresh_interval": -1
      }
    }
    
  2. Before loading data into temp1, run the following _cat command to review its shard allocation:

    GET _cat/shards/temp1?v&h=index,shard,prirep,state,node&s=index,shard,prirep
    
    You should get a response similar to the following:
    index shard prirep state   node
    temp1 0     p      STARTED node2
    temp1 1     p      STARTED node3
    temp1 2     p      STARTED node1
    temp1 3     p      STARTED node2
    
    Notice that two primary shards are on the same node.
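    If you want to understand why a shard was placed on a particular node, you can use the cluster allocation explain API (shard 0 here is just an example):

    GET _cluster/allocation/explain
    {
      "index": "temp1",
      "shard": 0,
      "primary": true
    }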

  3. Add one more node to your cluster by running the following command in your terminal:

    docker-compose -f /home/ubuntu/docker-compose.yml start node4
    

  4. Wait a few seconds for your node to join the cluster. Run the following _cat command to check if the new node joined the cluster:

    GET _cat/nodes?v
    
    You should see four nodes.

  5. Review the shard allocation of the temp1 index again. One of the primary shards has been relocated to the new node. You should now see a response similar to the following:

    index shard prirep state   node
    temp1 0     p      STARTED node3
    temp1 1     p      STARTED node4
    temp1 2     p      STARTED node2
    temp1 3     p      STARTED node1
    

  6. Your cluster is now ready. Using the Reindex API, reindex the documents from web_traffic into temp1 where user_agent.os.name.keyword equals "Android" (which will be 69,630 documents). It should take about 10-20 seconds for the reindex to execute.

    Solution
    POST _reindex
    {
      "source": {
        "index": "web_traffic",
        "query": {
          "match": {
            "user_agent.os.name.keyword": "Android"
          }
        }
      },
      "dest": {
        "index": "temp1"
      }
    }
    
  7. Let's see how many documents ended up in temp1:

    GET temp1/_count
    
    Notice there are 0 documents! Why?

    Solution

    Because you needed to load a large amount of data at once, you disabled refresh by setting index.refresh_interval to -1. You need to perform a refresh before the documents are searchable.
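    You can confirm the current setting by inspecting the index settings; the response should include "refresh_interval": "-1":

    GET temp1/_settings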

  8. To make your documents searchable, you can manually trigger a refresh for the temp1 index:

    POST temp1/_refresh
    

  9. Now, recheck the count. You should have 69,630 documents in temp1 now.

  10. Now that you have finished your initial load, re-enable refreshing on temp1 by setting the refresh interval back to the default value of 1 second.

    Solution

    PUT temp1/_settings
    {
      "index.refresh_interval": "1s"
    }
    
    You can also set it to null to restore the default value.
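    For example, restoring the default via null looks like this:

    PUT temp1/_settings
    {
      "index.refresh_interval": null
    }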

  11. Now that we are done with the initial data loading into temp1, it is a good idea to add replicas to the index. Having a replica on each node increases read throughput and availability. Run the following command, which auto-expands the number of replicas based on the number of data nodes in the cluster:

    PUT temp1/_settings
    {
      "index.auto_expand_replicas": "0-all"
    }
    
    NOTE: The auto-expanded number of replicas does not take any other allocation rules into account, such as shard allocation awareness or total shards per node, which can cause the cluster health to become yellow if those rules prevent all the replicas from being allocated.
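    If you prefer a fixed replica count instead of auto-expansion, you can disable auto-expand and set the number of replicas explicitly, for example one replica per primary. (Shown for reference only; do not run it now, or the shard counts in the next step will differ.)

    PUT temp1/_settings
    {
      "index.auto_expand_replicas": false,
      "index.number_of_replicas": 1
    }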

  12. Run the following _cat command to review temp1 shard allocation. How many replica shards do you have after enabling auto-expanded replicas?

    GET _cat/shards/temp1?v&h=index,shard,prirep,state,node&s=index,shard,prirep
    

    Solution

    You should see 16 shards now: three replicas were created for each of the four primary shards, one replica on every node other than the one holding the primary.

  13. Finally, remove node4 from the cluster by running this command in the terminal:

    docker-compose -f /home/ubuntu/docker-compose.yml stop node4
    

  14. Run the following _cat command to review temp1 shard allocation.

    GET _cat/shards/temp1?v&h=index,shard,prirep,state,node&s=index,shard,prirep
    
    Even though one node is down, all primary shards are still available: for the primary that was on node4, one of its replicas has been promoted to primary, and the number of replicas has been automatically reduced to match the remaining nodes.
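    You can also check the health of the index; because the auto-expanded replica count shrank along with the cluster, temp1 should report no unassigned shards:

    GET _cluster/health/temp1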

  15. We are done examining shards and replicas, so feel free to delete the temp1 index:

    DELETE temp1
    

Summary:

In this lab, you explored how to optimize write throughput during an initial load and auto-scale for reads using replicas.