
Benchmark Blazegraph import with increased buffer capacity (and other factors)
Closed, Resolved · Public

Description

In T359062: Assess Wikidata dump import hardware there's compelling evidence that increasing the buffer capacity for import, that is, setting com.bigdata.rdf.sail.bufferCapacity=1000000 in RWStore.properties, leads to a material performance improvement, as observed on a gaming-class desktop.

This task requests that we verify this soon on a WDQS node in the data center, preferably ahead of any further imports with changed graph split definitions.

At this point it seems clear that CPU speed, disk speed, and the buffer capacity make a meaningful difference in import time.
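
For concreteness, the change under test amounts to a single line in the Blazegraph RWStore properties file: com.bigdata.rdf.sail.bufferCapacity goes from 100000 to 1000000. A minimal shell sketch for applying it (assuming the /etc/wdqs/RWStore.wikidata.properties path used later in this task; the exact path may differ per host):

# Sketch only: bump the SAIL buffer capacity from 100_000 to 1_000_000 ahead of an import.
sudo sed -i 's/^com.bigdata.rdf.sail.bufferCapacity=100000$/com.bigdata.rdf.sail.bufferCapacity=1000000/' /etc/wdqs/RWStore.wikidata.properties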

Proposed:

Using the scholarly_articles split files, on wdqs2024, run imports as follows.

  1. With the CPU performance governor configuration applied as described in T336443#9726600 (see the sketch after this list) and with the existing default RWStore.properties configuration (which will have com.bigdata.rdf.sail.bufferCapacity=100000; note this is 100_000). This will let us better understand, for the R450 setup, whether the performance benefits of the performance governor configuration (roughly an analog of a faster processor, like what we've seen with the gaming-class desktop) extend to this bulk ingestion routine. We can compare against results from T350465#9405888.
  2. Then, still with the CPU performance governor configuration in place, using an RWStore.properties with a value of com.bigdata.rdf.sail.bufferCapacity=1000000 (note this is 1_000_000). This will let us verify whether the performance benefits extend further on this hardware class.
  3. If and when a high-speed NVMe drive is installed in wdqs2024 (T361216), run again with both the CPU performance governor and the higher buffer capacity in place. This will let us verify whether the performance benefits extend even further on this hardware class.
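
For reference, a minimal sketch of what the performance governor piece amounts to, assuming the cpupower utility is available on the host (the authoritative configuration is the one described in T336443#9726600; this is illustrative only):

# Sketch only: set all cores to the "performance" CPU frequency governor, then confirm.
sudo cpupower -c all frequency-set -g performance
cpupower frequency-info | grep -i governor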

We previously used wdqs1024 for the main ("non-scholarly") graph import; note that the request here is to do the scholarly article graph import on wdqs2024, mainly because we have an NVMe request in flight for that host.

Event Timeline

dr0ptp4kt renamed this task from Benchmark Blazegraph import with increased buffer capacity to Benchmark Blazegraph import with increased buffer capacity (and other factors). Apr 18 2024, 6:18 PM
Gehel triaged this task as Low priority. Apr 23 2024, 1:03 PM
Gehel moved this task from Incoming to Operations/SRE on the Wikidata-Query-Service board.

We'll use wdqs2023 to compare against wdqs1023 (same hardware). wdqs1023 serves the scholarly articles graph, so that's what we'll want to load wdqs2023 with.

Actually, we'll use wdqs1021, since we're not confident NFS will work seamlessly between eqiad and codfw (we use NFS to procure the dumps that the data reload is run from).

Gearing up to kick off a data reload on wdqs1021 after a quick patch to enable NFS on it.

Change #1026668 had a related patch set uploaded (by Ryan Kemper; author: Ryan Kemper):

[operations/puppet@production] wdqs: enable nfs data reloads on wdqs1021

https://gerrit.wikimedia.org/r/1026668

I'm realizing I don't remember enough about how we load specific graph splits (scholarly vs. main). But it's possible we won't need the above NFS patch if our previous process was to manually download the relevant dump file.

@dcausse We want to do a data reload of the scholarly graph for a WDQS host like wdqs2023. What's the process for that again?

@RKemper I think that's captured in P54284. If you need to get a copy of the files, there's a pointer in T350106#9381611 for how one might go about copying from HDFS to the local filesystem, and the rest of that ticket covers the data transfer. I kept a copy of the files at stat1006:/home/dr0ptp4kt/gzips/nt_wd_schol, so those should be ready to be copied over if that helps.
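
For completeness, a purely illustrative sketch of the HDFS-to-local copy step (the HDFS source path below is a placeholder; the actual paths and procedure are the ones in T350106#9381611):

# Placeholder source path; consult T350106#9381611 for the real location and transfer steps.
hdfs dfs -get hdfs:///path/to/nt_wd_schol ~/gzips/nt_wd_schol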

Another thing that can help with figuring things out later is to add some timing and a simple log file. A command like the following was helpful when I was trying this out on the gaming-class desktop (you may not need this if your tmux session lets you scroll back far enough, but it's nice for tailing even without tmux).

date | tee loadData.log; time ./loadData.sh -n wdq -d /mnt/firehose/split_0/nt_wd_schol -s 0 -e 0 2>&1 | tee -a loadData.log; time ./loadData.sh -n wdq -d /mnt/firehose/split_0/nt_wd_schol 2>&1 | tee -a loadData.log
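
And from another terminal (or after detaching from tmux), the log can simply be followed while the import runs:

tail -f loadData.log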

Change #1027001 had a related patch set uploaded (by Ryan Kemper; author: Ryan Kemper):

[operations/puppet@production] wdqs: switch wdqs2023 to graph split host

https://gerrit.wikimedia.org/r/1027001

Mentioned in SAL (#wikimedia-operations) [2024-05-03T21:27:31Z] <ryankemper> T362920 [wdqs] Depooled wdqs2023 in preparation to switch it to a graph split host

Change #1027001 merged by Ryan Kemper:

[operations/puppet@production] wdqs: switch wdqs2023 to graph split host

https://gerrit.wikimedia.org/r/1027001

Mentioned in SAL (#wikimedia-operations) [2024-05-03T21:38:44Z] <ryankemper@cumin2002> START - Cookbook sre.hosts.downtime for 6 days, 0:00:00 on wdqs2023.codfw.wmnet with reason: T362920

Mentioned in SAL (#wikimedia-operations) [2024-05-03T21:38:48Z] <ryankemper@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6 days, 0:00:00 on wdqs2023.codfw.wmnet with reason: T362920

Change #1026668 abandoned by Ryan Kemper:

[operations/puppet@production] wdqs: enable nfs data reloads on wdqs1021

Reason:

don't need nfs for graph split reload

https://gerrit.wikimedia.org/r/1026668

Mirroring comment in T359062#9775908:

In T362920: Benchmark Blazegraph import with increased buffer capacity (and other factors) we saw that this took about 3702 minutes, or about 2.57 days, for the scholarly article entity graph with the CPU governor change (described in T336443#9726600) alone on wdqs2023.

The count matches T359062#9695544.

select (count(*) as ?ct)
where {?s ?p ?o}

7643858078
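
For reference, a count like this can be issued against the local Blazegraph endpoint on the host; a sketch, assuming the standard WDQS endpoint path, and noting that a full ?s ?p ?o scan can take a long time on a graph this size:

curl -s 'http://localhost:9999/bigdata/namespace/wdq/sparql' --data-urlencode 'query=SELECT (COUNT(*) AS ?ct) WHERE { ?s ?p ?o }' -H 'Accept: text/csv'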

@dr0ptp4kt

> we saw that this took about 3702 minutes, or about 2.57 hours

Typo you'll want to fix here and in the original: 2.57 days

Kicked off the second run like so:

(1) vi /etc/wdqs/RWStore.wikidata.properties (added a trailing 0 to the bufferCapacity value, i.e. 100000 → 1000000)

(2) date | tee /home/ryankemper/loadData2.log; time sudo /home/ryankemper/loadData.sh -n wdq -d /srv/T362920/nt_wd_schol -s 0 -e 0 2>&1 | tee -a /home/ryankemper/loadData2.log; time sudo /home/ryankemper/loadData.sh -n wdq -d /srv/T362920/nt_wd_schol 2>&1 | tee -a /home/ryankemper/loadData2.log; echo -en "\nrun finished at " | tee -a /home/ryankemper/loadData2.log; date | tee -a /home/ryankemper/loadData2.log
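
A quick way to confirm the hand edit took effect:

grep bufferCapacity /etc/wdqs/RWStore.wikidata.properties
# expected: com.bigdata.rdf.sail.bufferCapacity=1000000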

> @dr0ptp4kt
>
> > we saw that this took about 3702 minutes, or about 2.57 hours
>
> Typo you'll want to fix here and in the original: 2.57 days

I think this is what is referred to as wishful thinking! Okay, updated the comment in the other ticket and in the comment up above.

Mirroring comment in T359062#9783010:

And for the second run in T362920: Benchmark Blazegraph import with increased buffer capacity (and other factors) we saw that this took about 3089 minutes, or about 2.15 days, for the scholarly article entity graph with the CPU governor change (described in T336443#9726600) plus the bufferCapacity at 1000000 on wdqs2023.
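
Taken together, the two runs suggest the larger bufferCapacity on its own saved roughly (3702 - 3089) / 3702 ≈ 16.6% of wall-clock import time for the scholarly article graph on this hardware, since both runs already had the governor change in place.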

dr0ptp4kt claimed this task.

Thanks @RKemper! These speed gains are welcome news. We should discuss in an upcoming meeting whether there are any further actions. I can see how we may want to set the bufferCapacity to 1000000 for imports, whereas we may want to just continue running with a bufferCapacity of 100000 once a node is in serving mode, but it's a good topic for discussion.