Commit Graph

37 Commits

Author SHA1 Message Date
Brian Brazil
d532272520 Add stale markers to synthetic series too when target stops. 2017-05-16 18:33:51 +01:00
Brian Brazil
b87d3ca9ea Create stale markers when a target is stopped.
When a target is no longer returned from SD, stop()
is called. However, the target may be recreated before the
next scrape interval happens, so we wait to set stale markers
until the scrape of the new target would have happened
and been ingested, which is two scrape intervals.

If we're shutting down, the context will be cancelled,
so return immediately rather than holding things up for potentially
minutes waiting to safely set stale markers no newer than now.
If the server starts back up again immediately, all is well.
If not, we're missing some stale markers.
2017-05-16 18:33:51 +01:00
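
A minimal Go sketch of the waiting logic described in the commit above, using a hypothetical writeStaleMarkersAfterStop helper rather than the actual Prometheus code: wait two scrape intervals so a recreated target's first scrape is already ingested, and skip the markers entirely if the context is cancelled during shutdown.

```
package main

import (
	"context"
	"time"
)

// Hypothetical helper, not the actual Prometheus code: after stop(), wait two
// scrape intervals so that a recreated target's first scrape has already been
// ingested, then write stale markers; skip them entirely on shutdown.
func writeStaleMarkersAfterStop(ctx context.Context, interval time.Duration, appendStale func(ts time.Time)) {
	select {
	case <-time.After(2 * interval):
		// Safe point: any scrape of a recreated target has happened and been
		// ingested, so these markers cannot be newer than real samples.
		appendStale(time.Now())
	case <-ctx.Done():
		// Shutting down: return immediately instead of blocking for up to two
		// intervals; the stale markers are simply lost in this case.
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	writeStaleMarkersAfterStop(ctx, 50*time.Millisecond, func(ts time.Time) {
		// The real code would append a stale NaN for each of the stopped
		// target's series at timestamp ts.
	})
}
```
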
Brian Brazil
3c45400130 Don't fail scrape if one sample violates ordering.
In Prometheus 1.x, a sample that is out of order
or has a duplicate timestamp is discarded, while
ingestion of the rest of the scrape continues.
This is now also true for 2.0.
2017-05-16 18:33:51 +01:00
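
A sketch of this ingestion policy under assumed names (errOutOfOrder and errDuplicateTimestamp stand in for the storage layer's real errors): ordering violations are counted and skipped, while any other error still aborts the scrape.

```
package main

import (
	"errors"
	"fmt"
)

// Hypothetical sentinel errors standing in for the storage layer's
// out-of-order and duplicate-timestamp errors.
var (
	errOutOfOrder         = errors.New("out of order sample")
	errDuplicateTimestamp = errors.New("duplicate timestamp")
)

type sample struct {
	ts int64
	v  float64
}

// appendAll skips (and counts) samples that violate ordering instead of
// failing the whole scrape; any other error still aborts it.
func appendAll(samples []sample, add func(sample) error) (int, error) {
	skipped := 0
	for _, s := range samples {
		switch err := add(s); {
		case err == nil:
		case errors.Is(err, errOutOfOrder), errors.Is(err, errDuplicateTimestamp):
			skipped++ // drop this sample, keep ingesting the rest
		default:
			return skipped, err
		}
	}
	return skipped, nil
}

func main() {
	var last int64 = -1
	add := func(s sample) error {
		if s.ts <= last {
			return errOutOfOrder
		}
		last = s.ts
		return nil
	}
	skipped, err := appendAll([]sample{{1, 1}, {3, 2}, {2, 9}, {4, 3}}, add)
	fmt.Println(skipped, err) // 1 <nil>: the out-of-order sample was dropped
}
```
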
Brian Brazil
fd5c5a50a3 Add stale markers on parse error.
If we fail to parse the result of a scrape,
we should treat that as a failed scrape and
add stale markers.
2017-05-16 18:33:51 +01:00
Brian Brazil
c0c7e32e61 Treat a failed scrape as an empty scrape for staleness.
If a target has died but is still in SD, we want the previously
scraped values to go stale. This would also apply to brief blips.
2017-05-16 18:33:51 +01:00
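
The two commits above share one idea: a failed or unparsable scrape is handled like an empty one, so every series seen in the previous scrape receives a stale marker. A rough sketch with made-up types:

```
package main

import (
	"fmt"
	"math"
)

// Made-up types sketching the idea: on a failed or unparsable scrape, behave
// as if the target returned nothing, so every series seen in the previous
// scrape receives a stale marker.
type scrapeCache struct {
	seen map[string]bool // series present in the last successful scrape
}

func (c *scrapeCache) reportFailed(ts int64, appendStale func(series string, ts int64, v float64)) {
	for series := range c.seen {
		// The real code uses a NaN with a dedicated bit pattern as the marker.
		appendStale(series, ts, math.NaN())
	}
	c.seen = map[string]bool{}
}

func main() {
	c := &scrapeCache{seen: map[string]bool{"up_example": true, "http_requests_total": true}}
	c.reportFailed(1000, func(series string, ts int64, v float64) {
		fmt.Printf("stale marker for %s at %d\n", series, ts)
	})
}
```
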
Brian Brazil
850ea412ad If an explicit timestamp is provided, bypass staleness. 2017-05-16 18:33:51 +01:00
Brian Brazil
5060a0fc51 Add unittests for ingestion stale NaNs 2017-05-16 18:33:51 +01:00
Brian Brazil
4f35952cf3 Inject a stale NaN when sample disappears between scrapes. 2017-05-16 18:33:51 +01:00
Brian Brazil
beaa7d5a43 Move consistent NaN logic into the parser. 2017-05-16 18:33:51 +01:00
Brian Brazil
76acf7b9b1 Ensure all the NaNs we ingest have the same bit pattern. 2017-05-16 18:33:51 +01:00
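
Because NaN never compares equal to itself, stale markers can only be recognised by their raw bits, which is why all ordinary NaNs are normalised to a single pattern. The constants below are illustrative, not necessarily the exact values Prometheus uses:

```
package main

import (
	"fmt"
	"math"
)

// Illustrative constants (not necessarily the exact values Prometheus uses):
// one canonical bit pattern for ordinary NaN values and a distinct pattern
// reserved for stale markers, so the two can be told apart by comparing bits.
const (
	normalNaN uint64 = 0x7ff8000000000001
	staleNaN  uint64 = 0x7ff0000000000002
)

// normalizeNaN forces every ordinary NaN to the canonical bit pattern.
func normalizeNaN(v float64) float64 {
	if math.IsNaN(v) && math.Float64bits(v) != staleNaN {
		return math.Float64frombits(normalNaN)
	}
	return v
}

// isStale detects a stale marker by its bits; comparing the values with ==
// would never work because NaN != NaN.
func isStale(v float64) bool {
	return math.Float64bits(v) == staleNaN
}

func main() {
	v := normalizeNaN(math.NaN())
	fmt.Println(math.Float64bits(v) == normalNaN, isStale(v)) // true false
	fmt.Println(isStale(math.Float64frombits(staleNaN)))      // true
}
```
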
Fabian Reinartz
73b8ff0ddc Merge branch 'master' into dev-2.0 2017-04-27 10:19:55 +02:00
Matt Layher
5e4f5fb5ad retrieval: make scrape timeout header consistent with others 2017-04-05 14:56:22 -04:00
Matt Layher
fe4b6693f7 retrieval: add Scrape-Timeout-Seconds header to each scrape request (#2565)
Fixes #2508.
2017-04-04 18:26:28 +01:00
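
A sketch of what the header looks like on the wire; the header name follows this commit's title, and the later commit above makes it consistent with the other scrape headers:

```
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// Sketch of attaching the scrape timeout as a request header so the scraped
// target can see how long it has to respond.
func newScrapeRequest(url string, timeout time.Duration) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Scrape-Timeout-Seconds", strconv.FormatFloat(timeout.Seconds(), 'f', -1, 64))
	return req, nil
}

func main() {
	req, _ := newScrapeRequest("http://localhost:9100/metrics", 10*time.Second)
	fmt.Println(req.Header.Get("Scrape-Timeout-Seconds")) // 10
}
```
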
Fabian Reinartz
1d3cdd0d67 Merge branch 'master' into dev-2.0-rebase 2017-01-30 17:43:01 +01:00
Fabian Reinartz
c691895a0f retrieval: cache series references, use pkg/textparse
With this change, scraping caches series references and only
allocates label sets if it has to retrieve a new reference.
pkg/textparse is used to do the conditional parsing and to reduce
allocations from 900B/sample to 0 in the general case.
2017-01-16 12:03:57 +01:00
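
A simplified sketch of the caching idea with made-up types: the raw scraped series text maps to a storage reference, so the common case appends by reference without parsing labels or allocating a label set.

```
package main

import "fmt"

// Made-up interface mirroring a ref-based appender: AddFast appends by a
// previously returned reference, Add parses the series and returns a new one.
type appender interface {
	Add(series string, t int64, v float64) (uint64, error)
	AddFast(ref uint64, t int64, v float64) error
}

// cachingLoop keeps the scraped series text -> reference mapping so the
// common case never parses labels or allocates a label set.
type cachingLoop struct {
	app  appender
	refs map[string]uint64
}

func (l *cachingLoop) append(series string, t int64, v float64) error {
	if ref, ok := l.refs[series]; ok {
		return l.app.AddFast(ref, t, v) // hot path: no parsing, no allocation
	}
	ref, err := l.app.Add(series, t, v) // cold path: parse labels, obtain a reference
	if err != nil {
		return err
	}
	l.refs[series] = ref
	return nil
}

type memAppender struct{ next uint64 }

func (m *memAppender) Add(series string, t int64, v float64) (uint64, error) {
	m.next++
	return m.next, nil
}
func (m *memAppender) AddFast(ref uint64, t int64, v float64) error { return nil }

func main() {
	l := &cachingLoop{app: &memAppender{}, refs: map[string]uint64{}}
	_ = l.append(`http_requests_total{code="200"}`, 1000, 1)
	_ = l.append(`http_requests_total{code="200"}`, 2000, 2) // served from the cache
	fmt.Println(len(l.refs))                                 // 1
}
```
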
Fabian Reinartz
ad9bc62e4c storage: extend appender and adapt it 2017-01-13 14:48:01 +01:00
beorn7
5dc01202d7 Retrieval: Remove some test lines that fail on Travis only
These lines exercise an append in
TestScrapeLoopWrapSampleAppender. Arguably, append shouldn't be tested
there in the first place.

Still, it's unclear why this fails only on Travis:

```
--- FAIL: TestScrapeLoopWrapSampleAppender (0.00s)
    scrape_test.go:259: Expected count of 1, got 0
    scrape_test.go:290: Expected count of 1, got 0
2017/01/07 22:48:26 http: TLS handshake error from 127.0.0.1:50716: read tcp 127.0.0.1:40265->127.0.0.1:50716: read: connection reset by peer
FAIL
FAIL	github.com/prometheus/prometheus/retrieval	3.603s
```

Should anybody ever find out why, please revert this commit accordingly.
2017-01-08 00:01:46 +01:00
beorn7
3610331eeb Retrieval: Do not buffer the samples if no sample limit configured
Also, simplify and streamline the code a bit.
2017-01-07 18:18:54 +01:00
Fabian Reinartz
e631a1260d retrieval: use separate appender per target 2016-12-30 21:35:35 +01:00
Fabian Reinartz
f8fc1f5bb2 *: migrate ingestion to new batch Appender 2016-12-29 11:03:56 +01:00
Brian Brazil
30448286c7 Add sample_limit to scrape config.
This imposes a hard limit on the number of samples ingested from the
target. This is counted after metric relabelling, to allow dropping of
problematic metrics.

This is intended as a very blunt tool to prevent overload due to
misbehaving targets that suddenly jump in sample count (e.g. adding
a label containing email addresses).

Add a metric to track how often this happens.

Fixes #2137
2016-12-16 15:10:09 +00:00
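
A sketch of the enforcement with hypothetical names: samples are counted after metric relabelling, and exceeding the configured limit fails the whole scrape and bumps a counter.

```
package main

import (
	"errors"
	"fmt"
)

// Hypothetical names sketching the enforcement: samples are counted after
// metric relabelling, and exceeding the configured limit fails the scrape
// and bumps a counter (standing in for the metric the commit adds).
var errSampleLimit = errors.New("sample limit exceeded")

type limitAppender struct {
	limit    int // 0 means no limit
	count    int
	exceeded *int
}

func (a *limitAppender) add() error {
	a.count++
	if a.limit > 0 && a.count > a.limit {
		*a.exceeded++
		return errSampleLimit // the whole scrape is rejected, not truncated
	}
	return nil
}

func main() {
	var exceeded int
	app := &limitAppender{limit: 3, exceeded: &exceeded}
	var err error
	for i := 0; i < 5 && err == nil; i++ {
		err = app.add()
	}
	fmt.Println(err, exceeded) // sample limit exceeded 1
}
```
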
Brian Brazil
c8de1484d5 Add scrape_samples_post_metric_relabeling
This reports the number of samples remaining after any keep/drop
from metric relabelling.
2016-12-13 17:32:11 +00:00
Brian Brazil
06b9df65ec Refactor and add unittests to scrape result handling. 2016-12-13 16:49:17 +00:00
Brian Brazil
b5ded43594 Allow buffering of scraped samples before sending them to storage. 2016-12-13 15:01:35 +00:00
Fabian Reinartz
d7f4f8b879 discovery: move TargetSet into discovery package 2016-11-23 09:14:44 +01:00
Fabian Reinartz
7ecc271411 Move Fatalf call into main test goroutine 2016-11-13 18:21:42 +01:00
Tobias Schmidt
29ced0090f Fix common English misspellings 2016-09-14 23:23:28 -04:00
Anders Daljord Morken
95cadd0702 Run scrape loop with interval 1 instead of 0
0 is considered an invalid interval by time.NewTicker() and will cause a
panic if control reaches that point. Given the vagaries of timekeeping,
this may occasionally happen and make this test unstable.
2016-08-18 09:39:11 +02:00
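
A tiny illustration of the failure mode: time.NewTicker panics on a non-positive interval, which is why the test uses 1 (one nanosecond) rather than 0.

```
package main

import (
	"fmt"
	"time"
)

// time.NewTicker rejects non-positive intervals, so an interval of 0 panics;
// 1 (one nanosecond) is the smallest value that works.
func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("panic:", r)
		}
	}()
	ok := time.NewTicker(1) // valid: 1ns
	ok.Stop()
	_ = time.NewTicker(0) // panics: non-positive interval for NewTicker
}
```
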
Julius Volz
97b018d26d Fix format argument in retrieval test. 2016-05-01 23:37:45 +02:00
Fabian Reinartz
895f2f092f Fix flaky scrape test 2016-03-09 16:00:33 +01:00
Fabian Reinartz
50c2f20756 Add targetScraper tests 2016-03-01 14:33:28 +01:00
Fabian Reinartz
0d7105abee Remove scrape config from Target.
This commit removes the scrapeConfig entirely from Target.
All identity-defining parameters are thus immutable now and the mutex
can be removed.

Target identity is now correctly defined by the labels and the full URL.
This in particular includes URL parameters that are not specified in the
label set.

The fingerprint is also removed from the hash to avoid an unnecessarily
tight coupling to the common/model package.
2016-03-01 14:32:57 +01:00
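
A sketch of identity-by-labels-plus-URL with simplified types; the real code hashes richer structures, but the point is that the hash covers the full label set and the full URL, including URL parameters:

```
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Simplified sketch: a target's identity hash covers its full label set and
// its full scrape URL (including URL parameters), and nothing else.
func targetHash(labels map[string]string, url string) uint64 {
	names := make([]string, 0, len(labels))
	for n := range labels {
		names = append(names, n)
	}
	sort.Strings(names) // the hash must not depend on map iteration order

	h := fnv.New64a()
	for _, n := range names {
		h.Write([]byte(n))
		h.Write([]byte(labels[n]))
	}
	h.Write([]byte(url))
	return h.Sum64()
}

func main() {
	a := targetHash(map[string]string{"job": "node"}, "http://host:9100/metrics?x=1")
	b := targetHash(map[string]string{"job": "node"}, "http://host:9100/metrics?x=2")
	fmt.Println(a == b) // false: differing URL parameters mean different targets
}
```
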
Fabian Reinartz
75681b691a Extract HTTP client from Target.
The HTTP client is the same across all targets with the same
scrape configuration. Thus, this commit moves it into the scrape
pool.
2016-03-01 14:31:57 +01:00
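
A sketch of the resulting structure with made-up types: one HTTP client per scrape pool, shared by all of that pool's targets.

```
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Made-up types sketching the structure: the HTTP client depends only on the
// scrape configuration, so it lives in the scrape pool and is shared by every
// target of that pool instead of each target holding its own.
type scrapePool struct {
	client  *http.Client // shared across all targets of this pool
	targets []string     // scrape URLs; the real type is far richer
}

func newScrapePool(timeout time.Duration) *scrapePool {
	return &scrapePool{client: &http.Client{Timeout: timeout}}
}

func main() {
	sp := newScrapePool(10 * time.Second)
	sp.targets = append(sp.targets, "http://a:9100/metrics", "http://b:9100/metrics")
	fmt.Println(len(sp.targets), sp.client.Timeout) // 2 10s: both scraped via one client
}
```
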
Fabian Reinartz
9bea27ae8a Add scraping tests 2016-03-01 14:00:48 +01:00
Fabian Reinartz
775316f8d2 Move appender construction from Target to scrapePool 2016-03-01 13:50:51 +01:00
Fabian Reinartz
1a3253e8ed Make scrape time unambiguous.
This commit changes the scraper interface to accept a timestamp
so that the timestamp reported by the caller and the timestamp
attached to the samples do not differ.
2016-03-01 13:48:36 +01:00
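
A sketch of the interface change with hypothetical names: the loop picks one timestamp, hands it to the scraper, and uses that same value for both reporting and the appended samples.

```
package main

import (
	"context"
	"fmt"
	"time"
)

// Hypothetical interface sketching the change: the loop picks one timestamp
// and passes it to the scraper, so the time it reports and the time attached
// to the samples are the same value by construction.
type scraper interface {
	scrape(ctx context.Context, ts time.Time) error
}

type fakeScraper struct{ last time.Time }

func (s *fakeScraper) scrape(ctx context.Context, ts time.Time) error {
	s.last = ts // samples would be appended at exactly this timestamp
	return nil
}

func main() {
	var sc scraper = &fakeScraper{}
	now := time.Now() // the single authoritative scrape time
	_ = sc.scrape(context.Background(), now)
	fmt.Println(sc.(*fakeScraper).last.Equal(now)) // true
}
```
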
Fabian Reinartz
2bb8ef99d1 Test scrape loop behavior. 2016-03-01 13:48:36 +01:00