Odysseus Benchmarking

Marco Grawunder
Posts: 272
Joined: Tue Jul 29, 2014 10:29 am
Location: Oldenburg, Germany

Re: Odysseus Benchmarking

Post by Marco Grawunder » Fri Feb 10, 2017 2:58 pm

What happens when you remove the csv-storing?

stefan
Posts: 85
Joined: Tue Jul 12, 2016 1:03 pm

Re: Odysseus Benchmarking

Post by stefan » Fri Feb 10, 2017 3:15 pm

Hi,

Then I get the latencies. :) Thanks.
OK, that's great. But the latency is always lend-minlstart and cannot be changed to lend-maxlstart, correct?

I have to see what I can do with this.
Thanks so far!


Re: Odysseus Benchmarking

Post by stefan » Fri Feb 10, 2017 3:22 pm

I looked into the latency code. I don't want to mess things up, but what if I changed the following line in latency.java:

    t.setAttribute(3, getLatency());

to

    t.setAttribute(3, getMaxLatency());

This should actually work without affecting anything else.

It's just an idea at the moment. I am not sure whether the evaluation feature itself is enough for me.


Re: Odysseus Benchmarking

Post by Marco Grawunder » Mon Feb 13, 2017 9:53 am

Latency is defined as the time between an element entering the system and an output being created. In most cases min latency and max latency are the same. But when you have, e.g., an aggregation or a join, the latency of the youngest element leading to the result is used. Otherwise (e.g., at the join) the data distribution would dominate the measurement, and that is not a fair metric to evaluate the processing of the system.

Maybe we can extend the evaluation feature to allow using the max latency value, too.
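To make the min/max distinction concrete, here is a minimal sketch (not the actual Odysseus classes; the method names `getLatency`/`getMaxLatency` simply mirror the ones mentioned in this thread) of a result element that keeps the start timestamps of both the oldest and the youngest contributing element:

```java
// Illustrative sketch only: a result element tracks the earliest and the
// latest latency-start timestamp of the elements that contributed to it,
// so two latency values can be derived from one output.
public class LatencySketch {
    private final long minLStart; // start of the oldest contributing element
    private final long maxLStart; // start of the youngest contributing element
    private final long lEnd;      // time the output was created

    public LatencySketch(long minLStart, long maxLStart, long lEnd) {
        this.minLStart = minLStart;
        this.maxLStart = maxLStart;
        this.lEnd = lEnd;
    }

    // latency of the youngest contributing element (the smaller value)
    public long getLatency() {
        return lEnd - maxLStart;
    }

    // latency of the oldest contributing element (the larger value)
    public long getMaxLatency() {
        return lEnd - minLStart;
    }

    public static void main(String[] args) {
        // two elements entered at t=100 and t=150; output created at t=200
        LatencySketch l = new LatencySketch(100, 150, 200);
        System.out.println(l.getLatency());    // 50: youngest element's latency
        System.out.println(l.getMaxLatency()); // 100: oldest element's latency
    }
}
```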


Re: Odysseus Benchmarking

Post by stefan » Wed Feb 15, 2017 11:47 pm

Hmm, I understand your point. I think my case is a bit special.
I create nodes and relations. As soon as all nodes and relations are created, I start my query. The nodes and relations are read and joined. This stream is aggregated to get some sums and joined again with the initial stream. In that case the min latency would be set by the latest join, correct? That is not ideal, but OK, I can work with it for the moment.

Another question/suggestion: if I try to evaluate the part of my query that reads data from a CSV file and writes it to RabbitMQ, I don't get the latencies, only the throughput. That is not optimal, because I cannot simply delete the sender operator for RabbitMQ: the results are very different if nothing has to be written to RabbitMQ. I will create my values manually, so that's fine for me. It's just a hint. :)


Re: Odysseus Benchmarking

Post by stefan » Thu Feb 16, 2017 1:14 am

And for some reason it does not work in the standalone Odysseus, only if I use Eclipse to start Odysseus. I checked the features: Evaluation, Latency, and Datarate are installed.

I can start the evaluation job, but the created folder contains only the query and the model.eval. The evaluation job finishes in a very short time; there was no processing, but also no output in the error log. Strange.

I will go ahead with the Eclipse version...


Re: Odysseus Benchmarking

Post by Marco Grawunder » Thu Feb 16, 2017 9:30 am

> I create nodes and relations. As soon as all nodes and relations are created, I start my query. The nodes and relations are read and joined. This stream is aggregated to get some sums and joined again with the initial stream. In that case the min latency would be set by the latest join, correct? That is not ideal, but OK, I can work with it for the moment.
In a join, the min latency is the value of the incoming tuple that is joined, i.e. the tuple that waits for a join partner is not the reason for the join latency.
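The join behaviour described above can be sketched as a merge of latency metadata: the result keeps the earliest start (basis for the max latency) and the latest start (basis for the reported min latency), so the tuple that arrives last and triggers the join determines the reported latency, not the tuple that was waiting. The class and method names below are illustrative, not the real Odysseus API.

```java
// Hypothetical sketch of merging latency metadata at a join.
public class JoinLatencyMerge {
    // each array holds {minLStart, maxLStart} of one input element
    public static long[] merge(long[] left, long[] right) {
        return new long[] {
            Math.min(left[0], right[0]), // oldest contributing element
            Math.max(left[1], right[1])  // youngest element, i.e. the tuple
                                         // that triggered the join result
        };
    }

    public static void main(String[] args) {
        // a tuple waiting since t=10 is joined by a partner arriving at t=90
        long[] merged = merge(new long[] {10, 10}, new long[] {90, 90});
        // if the output is created at t=95, the reported (min) latency is
        // 95 - 90 = 5, not 95 - 10 = 85: the waiting time does not count
        System.out.println(merged[0] + " " + merged[1]); // 10 90
    }
}
```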
> Another question/suggestion: if I try to evaluate the part of my query that reads data from a CSV file and writes it to RabbitMQ, I don't get the latencies, only the throughput. That is not optimal, because I cannot simply delete the sender operator for RabbitMQ: the results are very different if nothing has to be written to RabbitMQ. I will create my values manually, so that's fine for me. It's just a hint.
Yes, this is a bug, and I created a ticket for it.


Re: Odysseus Benchmarking

Post by Marco Grawunder » Thu Feb 16, 2017 10:22 am

> And for some reason it does not work in the standalone Odysseus, only if I use Eclipse to start Odysseus. I checked the features: Evaluation, Latency, and Datarate are installed.
>
> I can start the evaluation job, but the created folder contains only the query and the model.eval. The evaluation job finishes in a very short time; there was no processing, but also no output in the error log. Strange.
>
> I will go ahead with the Eclipse version...
Hmm. The queries can be started by hand, but the evaluation job does not work? Maybe there is a missing dependency. I will have a look at that.


Re: Odysseus Benchmarking

Post by Marco Grawunder » Thu Feb 16, 2017 10:37 am

OK, I think the System Load feature is missing. I have now added the required feature to the evaluation feature. An update should hopefully fix the first problem.


Re: Odysseus Benchmarking

Post by Marco Grawunder » Thu Feb 16, 2017 11:04 am

> Another question/suggestion: if I try to evaluate the part of my query that reads data from a CSV file and writes it to RabbitMQ, I don't get the latencies, only the throughput. That is not optimal, because I cannot simply delete the sender operator for RabbitMQ: the results are very different if nothing has to be written to RabbitMQ. I will create my values manually, so that's fine for me. It's just a hint.
Should be fixed now.
