Mirror of https://github.com/mozilla-services/syncstorage-rs.git (synced 2026-05-13 00:28:38 +02:00)
refactor: remove pyramid (#2289)
Some checks failed
Glean probe-scraper / glean-probe-scraper (push) Has been cancelled
Main Workflow - Lint, Build, Test / python-env (push) Has been cancelled
Main Workflow - Lint, Build, Test / rust-env (push) Has been cancelled
Build, Tag and Push Container Images to GAR / check (push) Has been cancelled
Publish Sync docs to pages / build-mdbook (push) Has been cancelled
Publish Sync docs to pages / build-openapi (push) Has been cancelled
Main Workflow - Lint, Build, Test / python-checks (push) Has been cancelled
Main Workflow - Lint, Build, Test / rust-checks (push) Has been cancelled
Main Workflow - Lint, Build, Test / clippy (mysql) (push) Has been cancelled
Main Workflow - Lint, Build, Test / clippy (postgres) (push) Has been cancelled
Main Workflow - Lint, Build, Test / clippy (spanner) (push) Has been cancelled
Main Workflow - Lint, Build, Test / clippy-release (mysql) (push) Has been cancelled
Main Workflow - Lint, Build, Test / clippy-release (postgres) (push) Has been cancelled
Main Workflow - Lint, Build, Test / clippy-release (spanner) (push) Has been cancelled
Main Workflow - Lint, Build, Test / build-and-unit-test-postgres (push) Has been cancelled
Main Workflow - Lint, Build, Test / build-postgres-image (push) Has been cancelled
Main Workflow - Lint, Build, Test / postgres-e2e-tests (push) Has been cancelled
Main Workflow - Lint, Build, Test / build-and-unit-test-mysql (push) Has been cancelled
Main Workflow - Lint, Build, Test / build-mysql-image (push) Has been cancelled
Main Workflow - Lint, Build, Test / mysql-e2e-tests (push) Has been cancelled
Main Workflow - Lint, Build, Test / build-and-unit-test-spanner (push) Has been cancelled
Main Workflow - Lint, Build, Test / build-spanner-image (push) Has been cancelled
Main Workflow - Lint, Build, Test / spanner-e2e-tests (push) Has been cancelled
Build, Tag and Push Container Images to GAR / build-and-push-syncstorage-rs (push) Has been cancelled
Build, Tag and Push Container Images to GAR / build-and-push-syncserver-postgres (push) Has been cancelled
Build, Tag and Push Container Images to GAR / build-and-push-syncstorage-rs-spanner-python-utils (push) Has been cancelled
Build, Tag and Push Container Images to GAR / build-and-push-syncserver-postgres-python-utils (push) Has been cancelled
Build, Tag and Push Container Images to GAR / build-and-push-syncserver-mysql (push) Has been cancelled
Publish Sync docs to pages / combine-and-prepare (push) Has been cancelled
Publish Sync docs to pages / deploy (push) Has been cancelled
* rmv aclauth policy, dottednameresolver, conftest calls
* use simple name space as config object
* remove Configurator
* remove pyramid_hawkauth stuff without inheriting HawkAuthPolicy pyramid, custom base class
* change mock_oauth_jwk to not use pyramid
* move to configparser in test_support
* rmv needless imports and pull out remaining unit test case for configurator
* remove tests.ini support from conftest
* update TestConfig to AuthConfig, remove useless stuff
* needless no-op end fn
* clean up poetry deps
* extra cruft in test_support
* get limit config to set defaults, update imports and refs
parent c00fcb36a3
commit b3771c239c
poetry.lock (generated, 1055 changed lines; file diff suppressed because it is too large)
@@ -24,8 +24,6 @@ konfig = "^1.1"
mysqlclient = "^2.2.7"
psutil = "^7.0.0"
pyjwt = "^2.10.1"
pyramid = "^1.10.8"
pyramid-hawkauth = "^2.0.0"
pyfxa = "0.8.1"
pytest = "^9.0.3"
requests = "^2.33.1"
@@ -5,15 +5,14 @@

Fixture hierarchy
─────────────────
st_ctx — function-scoped composite: sets up Pyramid configurator, creates a
         hawk-signed TestApp, seeds a random user, clears that user's data,
         and yields a plain dict consumed by test functions.
st_ctx — function-scoped composite: creates a hawk-signed TestApp, seeds a
         random user, clears that user's data, and yields a plain dict
         consumed by test functions.

Helper functions and constants live in helpers.py.
"""

import os
import uuid

import pytest

@@ -22,36 +21,24 @@ from tools.integration_tests.helpers import (
    make_test_app,
    retry_delete,
)
from tools.integration_tests.test_support import get_test_configurator
from tools.integration_tests.test_support import (
    FixedSecrets,
    TokenServerAuthenticationPolicy,
)


@pytest.fixture(scope="function")
def st_ctx():
    """Functional test context for storage API tests.

    Sets up a Pyramid configurator, creates a TestApp with hawk signing,
    authenticates a random user, clears that user's data, and yields a
    context dict. Tears down configurator on exit.
    Creates a TestApp with hawk signing, authenticates a random user,
    clears that user's data, and yields a context dict.
    """
    ini_file = os.environ.get("MOZSVC_TEST_INI_FILE", "tests.ini")
    os.environ["MOZSVC_UUID"] = str(uuid.uuid4())
    if "MOZSVC_SQLURI" not in os.environ:
        os.environ["MOZSVC_SQLURI"] = "sqlite:///:memory:"
    if "MOZSVC_ONDISK_SQLURI" not in os.environ:
        ondisk = os.environ["MOZSVC_SQLURI"]
        if ":memory:" in ondisk:
            ondisk = "sqlite:////tmp/tests-sync-%s.db" % os.environ["MOZSVC_UUID"]
        os.environ["MOZSVC_ONDISK_SQLURI"] = ondisk

    # Locate tests.ini relative to this file
    this_dir = os.path.dirname(os.path.abspath(__file__))
    config = get_test_configurator(this_dir, ini_file)
    config.commit()
    config.make_wsgi_app()

    secret = os.environ.get("SYNC_MASTER_SECRET", "TED KOPPEL IS A ROBOT")
    auth_policy = TokenServerAuthenticationPolicy(secrets=FixedSecrets(secret))
    host_url = os.environ.get("SYNC_SERVER_URL", "http://localhost:8000")

    auth = make_auth_state(config, host_url)
    auth = make_auth_state(auth_policy, host_url)
    auth_state = {
        "auth_token": auth["auth_token"],
        "auth_secret": auth["auth_secret"],
@@ -70,9 +57,6 @@ def st_ctx():
        "hashed_fxa_uid": auth["hashed_fxa_uid"],
        "fxa_kid": auth["fxa_kid"],
        "auth_state": auth_state,
        "config": config,
        "auth_policy": auth_policy,
        "host_url": host_url,
    }

    config.end()
    del os.environ["MOZSVC_UUID"]
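Note: the following is an illustrative sketch, not part of this diff, of how a test function consumes the st_ctx fixture dict defined above; the endpoint path is a hypothetical placeholder.

def test_server_responds(st_ctx):
    app = st_ctx["app"]            # hawk-signed webtest TestApp built by the fixture
    host_url = st_ctx["host_url"]  # e.g. http://localhost:8000
    # "/__heartbeat__" is a hypothetical path used only for illustration.
    resp = app.get("/__heartbeat__", status="*")
    assert resp.status_int < 500, f"server at {host_url} returned {resp.status_int}"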
@@ -13,17 +13,13 @@ import logging
import os
import random
import time
from types import SimpleNamespace
import uuid

import hawkauthlib
import webtest
from pyramid.interfaces import IAuthenticationPolicy
from pyramid.request import Request
from webtest import TestApp

# max number of attempts to check server heartbeat
SYNC_SERVER_STARTUP_MAX_ATTEMPTS = 35
SYNC_SERVER_URL = os.environ.get("SYNC_SERVER_URL", "http://localhost:8000")

logger = logging.getLogger("tools.integration-tests")
@@ -54,6 +50,17 @@ def _retry_send(func, *args, **kwargs):
    return func(*args, **kwargs)


def _make_fake_request(host_url):
    """Parse host url and provide a SimpleNamespace repr of the host_url and script name path."""
    import urllib.parse as urlparse

    parsed = urlparse.urlparse(host_url)
    return SimpleNamespace(
        host_url=f"{parsed.scheme}://{parsed.netloc}",
        script_name=parsed.path,
    )


def retry_post_json(app, *args, **kwargs):
    """POST JSON with retry on transient errors."""
    return _retry_send(app.post_json, *args, **kwargs)
@@ -69,17 +76,17 @@ def retry_delete(app, *args, **kwargs):
    return _retry_send(app.delete, *args, **kwargs)


def make_auth_state(config, host_url):
def make_auth_state(auth_policy, host_url):
    """Generate hawk credentials for a new random user."""
    global_secret = os.environ.get("SYNC_MASTER_SECRET")
    policy = config.registry.getUtility(IAuthenticationPolicy)
    policy = auth_policy
    if global_secret is not None:
        policy.secrets._secrets = [global_secret]
    user_id = random.randint(1, 100000)
    fxa_uid = "DECAFBAD" + str(uuid.uuid4().hex)[8:]
    hashed_fxa_uid = str(uuid.uuid4().hex)
    fxa_kid = "0000000000000-DECAFBAD" + str(uuid.uuid4().hex)[8:]
    req = Request.blank(host_url)
    req = _make_fake_request(host_url)
    creds = policy.encode_hawk_id(
        req,
        user_id,
@@ -148,12 +155,12 @@ def switch_user(st_ctx):
    orig_auth_token = st_ctx["auth_state"]["auth_token"]
    orig_auth_secret = st_ctx["auth_state"]["auth_secret"]

    config = st_ctx["config"]
    auth_policy = st_ctx["auth_policy"]
    host_url = st_ctx["host_url"]
    app = st_ctx["app"]

    for _ in range(10):
        new_auth = make_auth_state(config, host_url)
        new_auth = make_auth_state(auth_policy, host_url)
        if new_auth["user_id"] != orig_user_id:
            break
    else:
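Note: a minimal sketch (not part of this diff) of the pyramid-free credential flow shown above. The policy is built directly from FixedSecrets, and make_auth_state signs against the SimpleNamespace request stand-in instead of pyramid's Request.blank; the default secret and host URL are the ones used elsewhere in these tests.

import os

from tools.integration_tests.helpers import make_auth_state
from tools.integration_tests.test_support import (
    FixedSecrets,
    TokenServerAuthenticationPolicy,
)

secret = os.environ.get("SYNC_MASTER_SECRET", "TED KOPPEL IS A ROBOT")
policy = TokenServerAuthenticationPolicy(secrets=FixedSecrets(secret))
auth = make_auth_state(policy, "http://localhost:8000")
# auth carries hawk credentials for a freshly generated random user.
print(auth["auth_token"], auth["auth_secret"])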
tools/integration_tests/poetry.lock (generated, 1185 changed lines; file diff suppressed because it is too large)
@@ -16,13 +16,10 @@ package-mode = false
[tool.poetry.dependencies]
cryptography = "46.0.5"
hawkauthlib = "^2.0.0"
konfig = "^1.1"
mysqlclient = "^2.2.7"
psycopg2-binary = "^2.9.11"
psutil = "^7.0.0"
pyjwt = "^2.10.1"
pyramid = "^1.10.8"
pyramid-hawkauth = "^2.0.0"
pyfxa = "0.8.1"
pytest = "^9.0.3"
requests = "^2.32.4"
@@ -23,7 +23,6 @@ import urllib

import simplejson  # type: ignore[import-untyped]

from pyramid.interfaces import IAuthenticationPolicy
from webtest.app import AppError

import tokenlib
@@ -42,9 +41,20 @@ WEAVE_SIZE_LIMIT_EXCEEDED = 17 # Size limit exceeded
BATCH_MAX_IDS = 100


def get_limit_config(request, limit):
    """Get the configured value for the named size limit."""
    return request.registry.settings["storage." + limit]
def get_limit_config(limit):
    """Get a default value for the named size limit.

    This fallback is only reached when the server's /info/configuration
    endpoint does not return the requested limit key.
    """
    defaults = {
        "max_post_bytes": 2097152,
        "max_post_records": 100,
        "max_request_bytes": 2101248,
    }
    if limit not in defaults:
        raise KeyError(f"unknown limit: {limit}")
    return defaults[limit]


def json_dumps(value):
@@ -894,9 +904,9 @@ def test_multi_item_post_limits(st_ctx):
        max_count = res.json["max_post_records"]
        max_req_bytes = res.json["max_request_bytes"]
    except KeyError:
        max_bytes = get_limit_config(st_ctx["config"], "max_post_bytes")
        max_count = get_limit_config(st_ctx["config"], "max_post_records")
        max_req_bytes = get_limit_config(st_ctx["config"], "max_request_bytes")
        max_bytes = get_limit_config("max_post_bytes")
        max_count = get_limit_config("max_post_records")
        max_req_bytes = get_limit_config("max_request_bytes")

    # Uploading max_count-5 small objects should succeed.
    bsos = [{"id": str(i).zfill(2), "payload": "X"} for i in range(max_count - 5)]
@@ -1518,7 +1528,7 @@ def test_accessing_info_collections_with_an_expired_token(st_ctx):
    assert resp.json["xxx_col1"] == ts

    # Forge an expired token to use for the test.
    auth_policy = st_ctx["config"].registry.getUtility(IAuthenticationPolicy)
    auth_policy = st_ctx["auth_policy"]
    secret = auth_policy._get_token_secrets(st_ctx["host_url"])[-1]
    tm = tokenlib.TokenManager(secret=secret)
    exp = time.time() - 60
@@ -1,107 +1,13 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
"""Base test class, with an instantiated app."""
"""Support utilities for storage integration tests.

Provides auth policy classes and secrets management used by conftest.py fixtures.
"""

import contextlib
import functools
from konfig import Config, SettingsDict
import hawkauthlib
import os
from pyramid.authorization import ACLAuthorizationPolicy
from pyramid.config import Configurator
from pyramid.interfaces import IAuthenticationPolicy
from pyramid.request import Request
from pyramid.util import DottedNameResolver
from pyramid_hawkauth import HawkAuthenticationPolicy
import random
import re
import csv
import binascii
from collections import defaultdict
import sys
import time
import tokenlib
import urllib.parse as urlparse

# unittest imported by pytest requirement
import unittest
import uuid
from webtest import TestApp
from zope.interface import implementer


VALID_FXA_ID_REGEX = re.compile("^[A-Za-z0-9=\\-_]{1,64}$")


class Secrets(object):
    """Load node-specific secrets from a file.
    This class provides a method to get a list of secrets for a node
    ordered by timestamps. The secrets are stored in a CSV file which
    is loaded when the object is created.
    Options:
    - **filename**: a list of file paths, or a single path.
    """

    def __init__(self, filename=None):
        self._secrets = defaultdict(list)
        if filename is not None:
            self.load(filename)

    def keys(self):
        """Return all node keys stored in secrets."""
        return self._secrets.keys()

    def load(self, filename):
        """Load secrets from the given filename or list of filenames."""
        if not isinstance(filename, (list, tuple)):
            filename = [filename]

        for name in filename:
            with open(name, "rb") as f:
                reader = csv.reader(f, delimiter=",")
                for line, row in enumerate(reader):
                    if len(row) < 2:
                        continue
                    node = row[0]
                    if node in self._secrets:
                        raise ValueError("Duplicate node line %d" % line)
                    secrets = []
                    for secret in row[1:]:
                        secret = secret.split(":")
                        if len(secret) != 2:
                            raise ValueError("Invalid secret line %d" % line)
                        secrets.append(tuple(secret))
                    secrets.sort()
                    self._secrets[node] = secrets

    def save(self, filename):
        """Save secrets to the given filename in CSV format."""
        with open(filename, "wb") as f:
            writer = csv.writer(f, delimiter=",")
            for node, secrets in self._secrets.items():
                secrets = [
                    "%s:%s" % (timestamp, secret) for timestamp, secret in secrets
                ]
                secrets.insert(0, node)
                writer.writerow(secrets)

    def get(self, node):
        """Return list of secrets for the given node."""
        return [secret for timestamp, secret in self._secrets[node]]

    def add(self, node, size=256):
        """Add a new randomly generated secret for the given node."""
        timestamp = str(int(time.time()))
        secret = binascii.b2a_hex(os.urandom(size))[:size]
        # The new secret *must* sort at the end of the list.
        # This forbids you from adding multiple secrets per second.
        try:
            if timestamp <= self._secrets[node][-1][0]:
                assert False, "You can only add one secret per second"
        except IndexError:
            pass
        self._secrets[node].append((timestamp, secret))


class FixedSecrets(object):
@@ -127,354 +33,6 @@ class FixedSecrets(object):
        return []


def resolve_name(name, package=None):
    """Resolve dotted name into a python object.
    This function resolves a dotted name as a reference to a python object,
    returning whatever object happens to live at that path. It's a simple
    convenience wrapper around pyramid's DottedNameResolver.
    The optional argument 'package' specifies the package name for relative
    imports. If not specified, only absolute paths will be supported.
    """
    return DottedNameResolver(package).resolve(name)


def load_into_settings(filename, settings):
    """Load config file contents into a Pyramid settings dict.
    This is a helper function for initialising a Pyramid settings dict from
    a config file. It flattens the config file sections into dotted settings
    names and updates the given dictionary in place.
    You would typically use this when constructing a Pyramid Configurator
    object, like so::
        def main(global_config, **settings):
            config_file = global_config['__file__']
            load_info_settings(config_file, settings)
            config = Configurator(settings=settings)
    """
    filename = os.path.expandvars(os.path.expanduser(filename))
    filename = os.path.abspath(os.path.normpath(filename))
    config = Config(filename)

    # Konfig keywords are added to every section when present, we have to
    # filter them out, otherwise plugin.load_from_config and
    # plugin.load_from_settings are unable to create instances.
    konfig_keywords = ["extends", "overrides"]

    # Put values from the config file into the pyramid settings dict.
    for section in config.sections():
        setting_prefix = section.replace(":", ".")
        for name, value in config.get_map(section).items():
            if name not in konfig_keywords:
                settings[setting_prefix + "." + name] = value

    # Store a reference to the Config object itself for later retrieval.
    settings["config"] = config
    return config


def get_test_configurator(root, ini_file="tests.ini"):
    """Find a file with testing settings, turn it into a configurator."""
    ini_dir = root
    while True:
        ini_path = os.path.join(ini_dir, ini_file)
        if os.path.exists(ini_path):
            break
        if ini_path == ini_file or ini_path == "/" + ini_file:
            raise RuntimeError("cannot locate " + ini_file)
        ini_dir = os.path.split(ini_dir)[0]
    # print("finding configurator for", ini_path)
    config = get_configurator({"__file__": ini_path})
    authz_policy = ACLAuthorizationPolicy()
    config.set_authorization_policy(authz_policy)
    authn_policy = TokenServerAuthenticationPolicy.from_settings(config.get_settings())
    config.set_authentication_policy(authn_policy)
    return config


def get_configurator(global_config, **settings):
    """Create a pyramid Configurator and populate it with sensible defaults.
    This function is a helper to create and pre-populate a Configurator
    object using the given paste-deploy settings dicts. It uses the
    mozsvc.config module to flatten the config paste-deploy config file
    into the settings dict so that non-mozsvc pyramid apps can read values
    from it easily.
    """
    # Populate a SettingsDict with settings from the deployment file.
    settings = SettingsDict(settings)
    config_file = global_config.get("__file__")
    if config_file is not None:
        load_into_settings(config_file, settings)
    # Update with default pyramid settings, and then insert for all to use.
    config = Configurator(settings={})
    settings.setdefaults(config.registry.settings)
    config.registry.settings = settings
    return config


def restore_env(*keys):
    """Decorate a test to ensure os.environ gets restored after the call.

    Given a list of environment variable keys, this decorator will save the
    current values of those environment variables at the start of the call
    and restore them to those values at the end.
    """

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwds):
            values = [os.environ.get(key) for key in keys]
            try:
                return func(*args, **kwds)
            finally:
                for key, value in zip(keys, values):
                    if value is None:
                        os.environ.pop(key, None)
                    else:
                        os.environ[key] = value

        return wrapper

    return decorator


class TestCase(unittest.TestCase):
    """TestCase with some generic helper methods."""

    def setUp(self):
        """Set up test fixtures."""
        super(TestCase, self).setUp()
        self.config = self.get_configurator()

    def tearDown(self):
        """Tear down test fixtures."""
        self.config.end()
        super(TestCase, self).tearDown()

    def get_configurator(self):
        """Load the configurator to use for the tests."""
        # Load config from the .ini file.
        # print("get_configurator", self, getattr(self, "TEST_INI_FILE", None))
        if not hasattr(self, "ini_file"):
            if hasattr(self, "TEST_INI_FILE"):
                self.ini_file = self.TEST_INI_FILE
            else:
                # The file to use may be specified in the environment.
                self.ini_file = os.environ.get("MOZSVC_TEST_INI_FILE", "tests.ini")
        __file__ = sys.modules[self.__class__.__module__].__file__
        config = get_test_configurator(__file__, self.ini_file)
        config.begin()
        return config

    """
    def make_request(self, *args, **kwds):
        config = kwds.pop("config", self.config)
        return make_request(config, *args, **kwds)
    """


class StorageTestCase(TestCase):
    """TestCase class with automatic cleanup of database files."""

    @restore_env("MOZSVC_TEST_INI_FILE")
    def setUp(self):
        """Set up test fixtures with fresh environment variables."""
        # Put a fresh UUID into the environment.
        # This can be used in e.g. config files to create unique paths.
        os.environ["MOZSVC_UUID"] = str(uuid.uuid4())
        # Ensure a default sqluri if none is provided in the environment.
        # We use an in-memory sqlite db by default, except for tests that
        # explicitly require an on-disk file.
        if "MOZSVC_SQLURI" not in os.environ:
            os.environ["MOZSVC_SQLURI"] = "sqlite:///:memory:"
        if "MOZSVC_ONDISK_SQLURI" not in os.environ:
            ondisk_sqluri = os.environ["MOZSVC_SQLURI"]
            if ":memory:" in ondisk_sqluri:
                ondisk_sqluri = "sqlite:////tmp/tests-sync-%s.db"
                ondisk_sqluri %= (os.environ["MOZSVC_UUID"],)
            os.environ["MOZSVC_ONDISK_SQLURI"] = ondisk_sqluri
        # Allow subclasses to override default ini file.
        if hasattr(self, "TEST_INI_FILE"):
            if "MOZSVC_TEST_INI_FILE" not in os.environ:
                os.environ["MOZSVC_TEST_INI_FILE"] = self.TEST_INI_FILE
        super(StorageTestCase, self).setUp()

    def tearDown(self):
        """Tear down test fixtures and clean up databases."""
        self._cleanup_test_databases()
        # clear the pyramid threadlocals
        self.config.end()
        super(StorageTestCase, self).tearDown()
        del os.environ["MOZSVC_UUID"]

    def get_configurator(self):
        """Return the test configurator with storage settings applied."""
        config = super(StorageTestCase, self).get_configurator()
        # config.include("syncstorage")
        return config

    def _cleanup_test_databases(self):
        """Clean up any database used during the tests."""
        # Find and clean up any in-use databases
        for key, storage in self.config.registry.items():
            if not key.startswith("syncstorage:storage:"):
                continue
            while hasattr(storage, "storage"):
                storage = storage.storage
            # For server-based dbs, drop the tables to clear them.
            if storage.dbconnector.driver in ("mysql", "postgres"):
                with storage.dbconnector.connect() as c:
                    c.execute("DROP TABLE bso")
                    c.execute("DROP TABLE user_collections")
                    c.execute("DROP TABLE collections")
                    c.execute("DROP TABLE batch_uploads")
                    c.execute("DROP TABLE batch_upload_items")
            # Explicitly free any pooled connections.
            storage.dbconnector.engine.dispose()
        # Find any sqlite database files and delete them.
        for key, value in self.config.registry.settings.items():
            if key.endswith(".sqluri"):
                sqluri = urlparse.urlparse(value)
                if sqluri.scheme == "sqlite" and ":memory:" not in value:
                    if os.path.isfile(sqluri.path):
                        os.remove(sqluri.path)


class FunctionalTestCase(TestCase):
    """TestCase for writing functional tests using WebTest.
    This TestCase subclass provides an easy mechanism to write functional
    tests using WebTest. It exposes a TestApp instance as self.app.
    If the environment variable MOZSVC_TEST_REMOTE is set to a URL, then
    self.app will be a WSGIProxy application that forwards all requests to
    that server. This allows the functional tests to be easily run against
    a live server instance.
    """

    def setUp(self):
        """Set up the functional test app and host URL."""
        super(FunctionalTestCase, self).setUp()

        # now that we're testing against a rust server, we're always distant.
        # but some tests don't run if we're set to distant. so let's set
        # distant to false, figure out which tests we still want, and
        # delete the ones that don't work with distant = True along
        # with the need for self.distant.
        self.distant = False
        self.host_url = os.environ.get("SYNC_SERVER_URL", "http://localhost:8000")
        # This call implicitly commits the configurator. We probably still
        # want it for the side effects.
        self.config.make_wsgi_app()
        host_url = urlparse.urlparse(self.host_url)
        self.app = TestApp(
            self.host_url,
            extra_environ={
                "HTTP_HOST": host_url.netloc,
                "wsgi.url_scheme": host_url.scheme or "http",
                "SERVER_NAME": host_url.hostname,
                "REMOTE_ADDR": "127.0.0.1",
                "SCRIPT_NAME": host_url.path,
            },
        )


class StorageFunctionalTestCase(FunctionalTestCase, StorageTestCase):
    """Abstract base class for functional testing of a storage API."""

    def setUp(self):
        """Set up storage functional test with authentication credentials."""
        super(StorageFunctionalTestCase, self).setUp()

        # Generate userid and auth token crednentials.
        # This can be overridden by subclasses.
        self.config.commit()
        self._authenticate()

        # Monkey-patch the app to sign all requests with the token.
        def new_do_request(req, *args, **kwds):
            hawkauthlib.sign_request(req, self.auth_token, self.auth_secret)
            return orig_do_request(req, *args, **kwds)

        orig_do_request = self.app.do_request
        self.app.do_request = new_do_request

    def basic_testing_authenticate(self):
        """Authenticate using a random uid for basic testing."""
        # For basic testing, use a random uid and sign our own tokens.
        # Subclasses might like to override this and use a live tokenserver.
        pass

    def _authenticate(self):
        policy = self.config.registry.getUtility(IAuthenticationPolicy)
        global_secret = os.environ.get("SYNC_MASTER_SECRET")
        if global_secret is not None:
            policy.secrets._secrets = [global_secret]
        self.user_id = random.randint(1, 100000)
        self.fxa_uid = "DECAFBAD" + str(uuid.uuid4().hex)[8:]
        self.hashed_fxa_uid = str(uuid.uuid4().hex)
        self.fxa_kid = "0000000000000-DECAFBAD" + str(uuid.uuid4().hex)[8:]
        auth_policy = self.config.registry.getUtility(IAuthenticationPolicy)
        req = Request.blank(self.host_url)
        creds = auth_policy.encode_hawk_id(
            req,
            self.user_id,
            extra={
                # Include a hashed_fxa_uid to trigger uid/kid extraction
                "hashed_fxa_uid": self.hashed_fxa_uid,
                "fxa_uid": self.fxa_uid,
                "fxa_kid": self.fxa_kid,
            },
        )
        self.auth_token, self.auth_secret = creds

    @contextlib.contextmanager
    def _switch_user(self):
        # It's hard to reliably switch users when testing a live server.
        if self.distant:
            raise unittest.SkipTest("Skipped when testing a live server")
        # Temporarily authenticate as a different user.
        orig_user_id = self.user_id
        orig_auth_token = self.auth_token
        orig_auth_secret = self.auth_secret
        try:
            # We loop because the userids are randomly generated,
            # so there's a small change we'll get the same one again.
            for retry_count in range(10):
                self._authenticate()
                if self.user_id != orig_user_id:
                    break
            else:
                raise RuntimeError("Failed to switch to new user id")
            yield
        finally:
            self.user_id = orig_user_id
            self.auth_token = orig_auth_token
            self.auth_secret = orig_auth_secret

    def _cleanup_test_databases(self):
        # Don't cleanup databases unless we created them ourselves.
        if not self.distant:
            super(StorageFunctionalTestCase, self)._cleanup_test_databases()


MOCKMYID_DOMAIN = "mockmyid.s3-us-west-2.amazonaws.com"
MOCKMYID_PRIVATE_KEY = None
MOCKMYID_PRIVATE_KEY_DATA = {
    "algorithm": "DS",
    "x": "385cb3509f086e110c5e24bdd395a84b335a09ae",
    "y": "738ec929b559b604a232a9b55a5295afc368063bb9c20fac4e53a74970a4db795"
    "6d48e4c7ed523405f629b4cc83062f13029c4d615bbacb8b97f5e56f0c7ac9bc1"
    "d4e23809889fa061425c984061fca1826040c399715ce7ed385c4dd0d40225691"
    "2451e03452d3c961614eb458f188e3e8d2782916c43dbe2e571251ce38262",
    "p": "ff600483db6abfc5b45eab78594b3533d550d9f1bf2a992a7a8daa6dc34f8045a"
    "d4e6e0c429d334eeeaaefd7e23d4810be00e4cc1492cba325ba81ff2d5a5b305a"
    "8d17eb3bf4a06a349d392e00d329744a5179380344e82a18c47933438f891e22a"
    "eef812d69c8f75e326cb70ea000c3f776dfdbd604638c2ef717fc26d02e17",
    "q": "e21e04f911d1ed7991008ecaab3bf775984309c3",
    "g": "c52a4a0ff3b7e61fdf1867ce84138369a6154f4afa92966e3c827e25cfa6cf508b"
    "90e5de419e1337e07a2e9e2a3cd5dea704d175f8ebf6af397d69e110b96afb17c7"
    "a03259329e4829b0d03bbc7896b15b4ade53e130858cc34d96269aa89041f40913"
    "6c7242a38895c9d5bccad4f389af1d7a4bd1398bd072dffa896233397a",
}


class PermissiveNonceCache(object):
    """Object for not really managing a cache of used nonce values.
    This class implements the timestamp/nonce checking interface required
@@ -499,12 +57,12 @@ class PermissiveNonceCache(object):
        return True


@implementer(IAuthenticationPolicy)
class TokenServerAuthenticationPolicy(HawkAuthenticationPolicy):
    """Pyramid authentication policy for use with Tokenserver auth tokens.
    This class provides an IAuthenticationPolicy implementation based on
    the Mozilla TokenServer authentication tokens as described here:
    https://docs.services.mozilla.com/token/
class TokenServerAuthenticationPolicy:
    """Authentication policy for use with Tokenserver auth tokens.

    This class provides token-based authentication using Mozilla Tokenserver
    authentication tokens as described. See our Tokenserver docs for more information.

    For verification of token signatures, this plugin can use either a
    single fixed secret (via the argument 'secret') or a file mapping
    node hostnames to secrets (via the argument 'secrets_file'). The
@@ -528,62 +86,9 @@ class TokenServerAuthenticationPolicy(HawkAuthenticationPolicy):
        elif isinstance(secrets, (str, list)):
            secrets = FixedSecrets(secrets)
        elif isinstance(secrets, dict):
            secrets = resolve_name(secrets.pop("backend"))(**secrets)
            secrets = FixedSecrets(secrets.pop("secrets", []))
        self.secrets = secrets
        if kwds.get("nonce_cache") is None:
            kwds["nonce_cache"] = PermissiveNonceCache()
        super(TokenServerAuthenticationPolicy, self).__init__(**kwds)

    @classmethod
    def _parse_settings(cls, settings):
        """Parse settings for an instance of this class."""
        supercls = super(TokenServerAuthenticationPolicy, cls)
        kwds = supercls._parse_settings(settings)
        # collect leftover settings into a config for a Secrets object,
        # wtih some b/w compat for old-style secret-handling settings.
        secrets_prefix = "secrets."
        secrets = {}
        if "secrets_file" in settings:
            if "secret" in settings:
                raise ValueError("can't use both 'secret' and 'secrets_file'")
            secrets["backend"] = "tools.integration_tests.test_support.Secrets"
            secrets["filename"] = settings.pop("secrets_file")
        elif "secret" in settings:
            secrets["backend"] = "tools.integration_tests.test_support.FixedSecrets"
            secrets["secrets"] = settings.pop("secret")
        for name in settings.keys():
            if name.startswith(secrets_prefix):
                secrets[name[len(secrets_prefix) :]] = settings.pop(name)
        kwds["secrets"] = secrets
        return kwds

    def decode_hawk_id(self, request, tokenid):
        """Decode a Hawk token id into its userid and secret key.
        This method determines the appropriate secrets to use for the given
        request, then passes them on to tokenlib to handle the given Hawk
        token.
        If the id is invalid then ValueError will be raised.
        """
        # There might be multiple secrets in use, if we're in the
        # process of transitioning from one to another. Try each
        # until we find one that works.
        node_name = self._get_node_name(request)
        secrets = self._get_token_secrets(node_name)
        for secret in secrets:
            try:
                data = tokenlib.parse_token(tokenid, secret=secret)
                userid = data["uid"]
                token_node_name = data["node"]
                if token_node_name != node_name:
                    raise ValueError("incorrect node for this token")
                key = tokenlib.get_derived_secret(tokenid, secret=secret)
                break
            except (ValueError, KeyError):
                pass
        else:
            print("warn: Authentication Failed: invalid hawk id")
            raise ValueError("invalid Hawk id")
        return userid, key
        self.nonce_cache = kwds.get("nonce_cache") or PermissiveNonceCache()

    def encode_hawk_id(self, request, userid, extra=None):
        """Encode the given userid into a Hawk id and secret key.
@@ -619,192 +124,3 @@ class TokenServerAuthenticationPolicy(HawkAuthenticationPolicy):
        if self.secrets is None:
            return [None]
        return self.secrets.get(node_name)


@implementer(IAuthenticationPolicy)
class SyncStorageAuthenticationPolicy(TokenServerAuthenticationPolicy):
    """Pyramid authentication policy with special handling of expired tokens.

    This class extends the standard mozsvc TokenServerAuthenticationPolicy
    to (carefully) allow some access by holders of expired tokens. Presenting
    an expired token will result in a principal of "expired:<uid>" rather than
    just "<uid>", allowing this case to be specially detected and handled for
    some resources without interfering with the usual authentication rules.
    """

    def __init__(self, secrets=None, **kwds):
        self.expired_token_timeout = kwds.pop("expired_token_timeout", None)
        if self.expired_token_timeout is None:
            self.expired_token_timeout = 300
        super(SyncStorageAuthenticationPolicy, self).__init__(secrets, **kwds)

    @classmethod
    def _parse_settings(cls, settings):
        """Parse settings for an instance of this class."""
        supercls = super(SyncStorageAuthenticationPolicy, cls)
        kwds = supercls._parse_settings(settings)
        expired_token_timeout = settings.pop("expired_token_timeout", None)
        if expired_token_timeout is not None:
            kwds["expired_token_timeout"] = int(expired_token_timeout)
        return kwds

    def decode_hawk_id(self, request, tokenid):
        """Decode a Hawk token id into its userid and secret key.

        This method determines the appropriate secrets to use for the given
        request, then passes them on to tokenlib to handle the given Hawk
        token. If the id is invalid then ValueError will be raised.

        Unlike the superclass method, this implementation allows expired
        tokens to be used up to a configurable timeout. The effective userid
        for expired tokens is changed to be "expired:<uid>".
        """
        now = time.time()
        node_name = self._get_node_name(request)
        # There might be multiple secrets in use,
        # so try each until we find one that works.
        secrets = self._get_token_secrets(node_name)
        for secret in secrets:
            try:
                tm = tokenlib.TokenManager(secret=secret)
                # Check for a proper valid signature first.
                # If that failed because of an expired token, check if
                # it falls within the allowable expired-token window.
                try:
                    data = self._parse_token(tm, tokenid, now)
                    userid = data["uid"]
                except tokenlib.errors.ExpiredTokenError:
                    recently = now - self.expired_token_timeout
                    data = self._parse_token(tm, tokenid, recently)
                    # We replace the uid with a special string to ensure that
                    # calling code doesn't accidentally treat the token as
                    # valid. If it wants to use the expired uid, it will have
                    # to explicitly dig it back out from `request.user`.
                    data["expired_uid"] = data["uid"]
                    userid = data["uid"] = "expired:%d" % (data["uid"],)
            except tokenlib.errors.InvalidSignatureError:
                # Token signature check failed, try the next secret.
                continue
            except TypeError as e:
                # Something went wrong when validating the contained data.
                raise ValueError(str(e))
            else:
                # Token signature check succeeded, quit the loop.
                break
        else:
            # The token failed to validate using any secret.
            print("warn Authentication Failed: invalid hawk id")
            raise ValueError("invalid Hawk id")

        # Let the app access all user data from the token.
        request.user.update(data)
        request.metrics["metrics_uid"] = data.get("hashed_fxa_uid")
        request.metrics["metrics_device_id"] = data.get("hashed_device_id")

        # Sanity-check that we're on the right node.
        if data["node"] != node_name:
            msg = "incorrect node for this token: %s"
            raise ValueError(msg % (data["node"],))

        # Calculate the matching request-signing secret.
        key = tokenlib.get_derived_secret(tokenid, secret=secret)

        return userid, key

    def encode_hawk_id(self, request, userid, extra=None):
        """Encode the given userid into a Hawk id and secret key.

        This method is essentially the reverse of decode_hawk_id. It is
        not needed for consuming authentication tokens, but is very useful
        when building them for testing purposes.

        Unlike its superclass method, this one allows the caller to specify
        a dict of additional user data to include in the auth token.
        """
        node_name = self._get_node_name(request)
        secret = self._get_token_secrets(node_name)[-1]
        data = {"uid": userid, "node": node_name}
        if extra is not None:
            data.update(extra)
        tokenid = tokenlib.make_token(data, secret=secret)
        key = tokenlib.get_derived_secret(tokenid, secret=secret)
        return tokenid, key

    def _parse_token(self, tokenmanager, tokenid, now):
        """Parse, validate and normalize user data from a tokenserver token.

        This is a thin wrapper around tokenmanager.parse_token to apply
        some extra validation to the contained user data. The data is
        signed and trusted, but it's still coming from outside the system
        so it's good defense-in-depth to validate it at our app boundary.

        We also deal with some historical baggage by renaming fields
        as needed.
        """
        data = tokenmanager.parse_token(tokenid, now=now)
        user = {}

        # It should always contain an integer userid.
        try:
            user["uid"] = data["uid"]
        except KeyError:
            raise ValueError("missing uid in token data")
        else:
            if not isinstance(user["uid"], int) or user["uid"] < 0:
                raise ValueError("invalid uid in token data")

        # It should always contain a string node name.
        try:
            user["node"] = data["node"]
        except KeyError:
            raise ValueError("missing node in token data")
        else:
            if not isinstance(user["node"], str):
                raise ValueError("invalid node in token data")

        # It might contain additional user identifiers for
        # storage and metrics purposes.
        #
        # There's some historical baggage here.
        #
        # Old versions of tokenserver would send a hashed "metrics uid" as the
        # "fxa_uid" key, attempting a small amount of anonymization. Newer
        # versions of tokenserver send the raw uid as "fxa_uid" and the hashed
        # version as "hashed_fxa_uid". The raw version may be used associating
        # stored data with a specific user, but the hashed version is the one
        # that we want for metrics.

        if "hashed_fxa_uid" in data:
            user["hashed_fxa_uid"] = data["hashed_fxa_uid"]
            if not VALID_FXA_ID_REGEX.match(user["hashed_fxa_uid"]):
                raise ValueError("invalid hashed_fxa_uid in token data")
            try:
                user["fxa_uid"] = data["fxa_uid"]
            except KeyError:
                raise ValueError("missing fxa_uid in token data")
            else:
                if not VALID_FXA_ID_REGEX.match(user["fxa_uid"]):
                    raise ValueError("invalid fxa_uid in token data")
            try:
                user["fxa_kid"] = data["fxa_kid"]
            except KeyError:
                raise ValueError("missing fxa_kid in token data")
            else:
                if not VALID_FXA_ID_REGEX.match(user["fxa_kid"]):
                    raise ValueError("invalid fxa_kid in token data")
        elif "fxa_uid" in data:
            user["hashed_fxa_uid"] = data["fxa_uid"]
            if not VALID_FXA_ID_REGEX.match(user["hashed_fxa_uid"]):
                raise ValueError("invalid fxa_uid in token data")

        if "hashed_device_id" in data:
            user["hashed_device_id"] = data["hashed_device_id"]
            if not VALID_FXA_ID_REGEX.match(user["hashed_device_id"]):
                raise ValueError("invalid hashed_device_id in token data")
        """
        elif "device_id" in data:
            user["hashed_device_id"] = data.get("device_id")
            if not VALID_FXA_ID_REGEX.match(user["hashed_device_id"]):
                raise ValueError("invalid device_id in token data")
        """
        return user
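Note: the commit message mentions moving test_support to configparser, but that part of the file is not visible in this diff. The following is only a rough sketch, under that assumption, of how the old konfig/Configurator ini loading could be replaced with the standard library; the function name is hypothetical.

import configparser
import os


def load_test_settings(ini_path):
    """Flatten "[section] name = value" pairs into dotted keys, mirroring the old load_into_settings helper."""
    parser = configparser.ConfigParser()
    parser.read(os.path.expandvars(os.path.expanduser(ini_path)))
    settings = {}
    for section in parser.sections():
        prefix = section.replace(":", ".")
        for name, value in parser.items(section):
            settings[f"{prefix}.{name}"] = value
    return settings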
@@ -1,26 +0,0 @@
[server:main]
use = egg:Paste#http
host = 0.0.0.0
port = 5000

[app:main]
use = egg:SyncStorage

[storage]
backend = syncstorage.storage.sql.SQLStorage
sqluri = ${MOZSVC_SQLURI}
standard_collections = true
quota_size = 5242880
pool_size = 100
pool_recycle = 3600
reset_on_return = true
create_tables = true
max_post_records = 4000
batch_upload_enabled = true
force_consistent_sort_order = true

[hawkauth]
secret = "TED KOPPEL IS A ROBOT"

[endpoints]
syncstorage-rs = http://localhost:8000/1.5/1
@@ -1,43 +1,51 @@
"""Mock FxA OAuth server for integration testing."""

from wsgiref.simple_server import make_server as _make_server
from pyramid.config import Configurator
from pyramid.response import Response
from pyramid.view import view_config
import json
import os
from wsgiref.simple_server import make_server as _make_server


@view_config(route_name="mock_oauth_verify", renderer="json")
def _mock_oauth_verify(request):
    body = json.loads(request.json_body["token"])

    return Response(
        json=body["body"], content_type="application/json", status=body["status"]
    )
def _mock_oauth_verify(environ, start_response):
    try:
        length = int(environ.get("CONTENT_LENGTH") or 0)
        body = json.loads(environ["wsgi.input"].read(length))
        payload = json.loads(body["token"])
        status = "%d OK" % payload["status"]
        response_body = json.dumps(payload["body"]).encode()
    except Exception as exc:
        status = "400 Bad Request"
        response_body = json.dumps({"error": str(exc)}).encode()
    start_response(status, [("Content-Type", "application/json")])
    return [response_body]


# The PyFxA OAuth client makes a request to the FxA OAuth server for its
# current public RSA key. While the client allows us to pass in a JWK to
# prevent this request from happening, mocking the endpoint is simpler.
@view_config(route_name="mock_oauth_jwk", renderer="json")
def _mock_oauth_jwk(request):
    return {"keys": [{"fake": "RSA key"}]}
def _mock_oauth_jwk(environ, start_response):
    # The PyFxA OAuth client makes a request to the FxA OAuth server for its
    # current public RSA key. While the client allows us to pass in a JWK to
    # prevent this request from happening, mocking the endpoint is simpler.
    response_body = json.dumps({"keys": [{"fake": "RSA key"}]}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [response_body]


_ROUTES = {
    "/v1/verify": _mock_oauth_verify,
    "/v1/jwks": _mock_oauth_jwk,
}


def _app(environ, start_response):
    path = environ.get("PATH_INFO", "")
    handler = _ROUTES.get(path)
    if handler is None:
        start_response("404 Not Found", [("Content-Type", "application/json")])
        return [json.dumps({"error": "not found"}).encode()]
    return handler(environ, start_response)


def make_server(host, port):
    """Create and return a mock FxA OAuth WSGI server bound to host and port."""
    with Configurator() as config:
        config.add_route("mock_oauth_verify", "/v1/verify")
        config.add_view(
            _mock_oauth_verify, route_name="mock_oauth_verify", renderer="json"
        )

        config.add_route("mock_oauth_jwk", "/v1/jwks")
        config.add_view(_mock_oauth_jwk, route_name="mock_oauth_jwk", renderer="json")
        app = config.make_wsgi_app()

    return _make_server(host, port, app)
    return _make_server(host, port, _app)


if __name__ == "__main__":
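Note: a usage sketch (not part of the diff) for running the mock FxA OAuth server above; the module path and port are assumptions made for illustration.

from tools.integration_tests.mock_fxa_server import make_server  # module path assumed

server = make_server("localhost", 6000)  # port chosen for illustration
server.serve_forever()                   # wsgiref server returned by _make_server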
tools/tokenserver/poetry.lock (generated, 1105 changed lines; file diff suppressed because it is too large)
@@ -17,7 +17,6 @@ package-mode = false
boto = "2.49.0"
hawkauthlib = "2.0.0"
mysqlclient = "^2.1.8"
pyramid = "^1.10.8"
sqlalchemy = "^1.4.46"
testfixtures = "^8.3.0"
tokenlib = "2.0.0"