Sentry Monitoring: Hands-On Local Development Setup for the Snuba Data Platform
This article is reprinted from the WeChat public account 黑客下午茶, written by 为少. Please contact the 黑客下午茶 public account before reprinting.
Clone the Repositories
Clone getsentry/sentry and getsentry/snuba:
git clone https://github.com/getsentry/sentry.git
git clone https://github.com/getsentry/snuba.git
Install System Dependencies (macOS as an Example)
Xcode CLI tools
xcode-select --install

Brewfile
Enter the sentry folder and you will see a Brewfile:
cd sentry
# required to run devservices
cask 'docker'

brew 'pyenv'

# required for pyenv's python-build
brew 'openssl'
brew 'readline'

# required for yarn test -u
brew 'watchman'

# required to build some of sentry's dependencies
brew 'pkgconfig'
brew 'libxslt'
brew 'libxmlsec1'
brew 'geoip'

# Currently needed because on Big Sur there's no wheel for it
brew 'librdkafka'

# direnv isn't defined here, because we have it configured to check for a bootstrapped environment.
# If it's installed in the early steps of the setup process, it just leads to confusion.
# brew 'direnv'

tap 'homebrew/cask'

# required for acceptance testing
cask 'chromedriver'

If Docker Desktop is already installed and running on your machine, you can comment out the cask 'docker' line.
Next, run:
brew bundle --verbose

If you did not already have Docker Desktop installed locally, you will also need to start it manually:
open -g -a Docker.app
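Before continuing, it is worth confirming that the Docker daemon is actually up, since devservices will fail otherwise; a simple check with the standard Docker CLI:

docker info > /dev/null 2>&1 && echo "Docker is running" || echo "Docker is not running yet"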
Build Toolchain

Sentry depends on Python wheels (packages that contain binary extension modules), which are officially distributed for the following platforms:
Linux compatible with PEP-513 (manylinux1)
macOS 10.15 or newer

If your development machine is not running one of the systems above, you will need to install a Rust toolchain. Follow the instructions at https://www.rust-lang.org/tools/install to install the compiler and related tools. Once installed, the Sentry setup will automatically use Rust to build all binary modules without any additional configuration.
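For reference, the install step on that page is a single shell command; it is reproduced here as published on rustup.rs, so check the site for the current form before running it:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustc --version   # confirm the toolchain is on your PATH afterwards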
Sentry generally tracks the latest stable Rust release, which is updated every six weeks. Make sure to keep your Rust toolchain current by occasionally running:
rustup update stable

Python
Sentry uses pyenv to install and manage Python versions. It was installed when you ran brew bundle.
To install the required version of Python, run the following command. This will take a while, because your computer is actually compiling Python!
make setup-pyenv

This assumes you are a Zsh user.
If you type which python, you should see something like $HOME/.pyenv/shims/python rather than /usr/bin/python. That is because the following has been added to your startup script:
Run cat ~/.zprofile and you will see something like the following:
# MacPorts Installer addition on 2021-10-20_at_11:48:22: adding an appropriate PATH variable for use with MacPorts.
export PATH="/opt/local/bin:/opt/local/sbin:$PATH"
# Finished adapting your PATH environment variable for use with MacPorts.

# It is assumed that pyenv is installed via Brew, so this is all we need to do.
eval "$(pyenv init --path)"

Virtual Environment
You are now ready to create a Python virtual environment. Run:
python -m venv .venv

Then activate the virtual environment:
source .venv/bin/activate

If everything worked, running which python should now print something like /Users/you/sentry/.venv/bin/python.
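As a quick sanity check (the paths shown are illustrative and will differ on your machine), confirm that the interpreter and pip now both resolve to the virtual environment:

which python      # expect something like /Users/you/sentry/.venv/bin/python
which pip         # expect something like /Users/you/sentry/.venv/bin/pip
python --version  # should match the version installed by make setup-pyenv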
Hands-On Snuba Setup
Start the Snuba Dependency Containers
cd ../sentry
git checkout master
git pull
source .venv/bin/activate
sentry devservices up --exclude=snuba
# 11:17:59 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
# 11:18:01 [INFO] sentry.plugins.github: apps-not-configured
# > Pulling image postgres:9.6-alpine
# > Pulling image yandex/clickhouse-server:20.3.9.70
# > Not starting container sentry_relay because it should be started on-demand with devserver.
# > Creating sentry_redis volume
# > Creating sentry_zookeeper_6 volume
# > Creating sentry_kafka_6 volume
# > Creating container sentry_redis
# > Creating container sentry_zookeeper
# > Creating container sentry_kafka
# > Starting container sentry_redis (listening: (127.0.0.1, 6379))
# > Starting container sentry_kafka (listening: (127.0.0.1, 9092))
# > Starting container sentry_zookeeper
# > Creating sentry_clickhouse volume
# > Creating container sentry_clickhouse
# > Creating sentry_postgres volume
# > Creating sentry_wal2json volume
# > Starting container sentry_clickhouse (listening: (127.0.0.1, 9000), (127.0.0.1, 9009), (127.0.0.1, 8123))
# > Creating container sentry_postgres
# > Starting container sentry_postgres (listening: (127.0.0.1, 5432))

This checks out the latest Sentry on master and brings up all of Snuba's dependencies.
Snuba mainly depends on the clickhouse, zookeeper, kafka, and redis containers.
Check with docker ps:
1149a6f6ff23   postgres:9.6-alpine                  "docker-entrypoint.s…"   3 minutes ago   Up 3 minutes   127.0.0.1:5432->5432/tcp                                                        sentry_postgres
a7f3af7d52bb   yandex/clickhouse-server:20.3.9.70   "/entrypoint.sh"         3 minutes ago   Up 3 minutes   127.0.0.1:8123->8123/tcp, 127.0.0.1:9000->9000/tcp, 127.0.0.1:9009->9009/tcp   sentry_clickhouse
68913ee15c43   confluentinc/cp-zookeeper:6.2.0      "/etc/confluent/dock…"   3 minutes ago   Up 3 minutes   2181/tcp, 2888/tcp, 3888/tcp                                                    sentry_zookeeper
5a248eb26ed3   confluentinc/cp-kafka:6.2.0          "/etc/confluent/dock…"   3 minutes ago   Up 3 minutes   127.0.0.1:9092->9092/tcp                                                        sentry_kafka
0573aff7b5af   redis:5.0-alpine                     "docker-entrypoint.s…"   3 minutes ago   Up 3 minutes   127.0.0.1:6379->6379/tcp                                                        sentry_redis

Set Up the Snuba Virtual Environment
cd snuba
make pyenv-setup
python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip==21.1.3
make develop

List the Migrations
snuba migrations list
# system
# [ ] 0001_migrations
#
# events
# [ ] 0001_events_initial
# [ ] 0002_events_onpremise_compatibility
# [ ] 0003_errors
# [ ] 0004_errors_onpremise_compatibility
# [ ] 0005_events_tags_hash_map (blocking)
# [ ] 0006_errors_tags_hash_map (blocking)
# [ ] 0007_groupedmessages
# [ ] 0008_groupassignees
# [ ] 0009_errors_add_http_fields
# [ ] 0010_groupedmessages_onpremise_compatibility (blocking)
# [ ] 0011_rebuild_errors
# [ ] 0012_errors_make_level_nullable
# [ ] 0013_errors_add_hierarchical_hashes
# [ ] 0014_backfill_errors (blocking)
# [ ] 0015_truncate_events
#
# transactions
# [ ] 0001_transactions
# [ ] 0002_transactions_onpremise_fix_orderby_and_partitionby (blocking)
# [ ] 0003_transactions_onpremise_fix_columns (blocking)
# [ ] 0004_transactions_add_tags_hash_map (blocking)
# [ ] 0005_transactions_add_measurements
# [ ] 0006_transactions_add_http_fields
# [ ] 0007_transactions_add_discover_cols
# [ ] 0008_transactions_add_timestamp_index
# [ ] 0009_transactions_fix_title_and_message
# [ ] 0010_transactions_nullable_trace_id
# [ ] 0011_transactions_add_span_op_breakdowns
# [ ] 0012_transactions_add_spans
#
# discover
# [ ] 0001_discover_merge_table
# [ ] 0002_discover_add_deleted_tags_hash_map
# [ ] 0003_discover_fix_user_column
# [ ] 0004_discover_fix_title_and_message
# [ ] 0005_discover_fix_transaction_name
# [ ] 0006_discover_add_trace_id
# [ ] 0007_discover_add_span_id
#
# outcomes
# [ ] 0001_outcomes
# [ ] 0002_outcomes_remove_size_and_bytes
# [ ] 0003_outcomes_add_category_and_quantity
# [ ] 0004_outcomes_matview_additions (blocking)
#
# metrics
# [ ] 0001_metrics_buckets
# [ ] 0002_metrics_sets
# [ ] 0003_counters_to_buckets
# [ ] 0004_metrics_counters
# [ ] 0005_metrics_distributions_buckets
# [ ] 0006_metrics_distributions
# [ ] 0007_metrics_sets_granularity_10
# [ ] 0008_metrics_counters_granularity_10
# [ ] 0009_metrics_distributions_granularity_10
# [ ] 0010_metrics_sets_granularity_1h
# [ ] 0011_metrics_counters_granularity_1h
# [ ] 0012_metrics_distributions_granularity_1h
# [ ] 0013_metrics_sets_granularity_1d
# [ ] 0014_metrics_counters_granularity_1d
# [ ] 0015_metrics_distributions_granularity_1d
#
# sessions
# [ ] 0001_sessions
# [ ] 0002_sessions_aggregates
# [ ] 0003_sessions_matview

Run the Migrations
snuba migrations migrate --force
# ......
# 2021-12-01 19:45:57,557 Running migration: 0014_metrics_counters_granularity_1d
# 2021-12-01 19:45:57,575 Finished: 0014_metrics_counters_granularity_1d
# 2021-12-01 19:45:57,589 Running migration: 0015_metrics_distributions_granularity_1d
# 2021-12-01 19:45:57,610 Finished: 0015_metrics_distributions_granularity_1d
# 2021-12-01 19:45:57,623 Running migration: 0001_sessions
# 2021-12-01 19:45:57,656 Finished: 0001_sessions
# 2021-12-01 19:45:57,669 Running migration: 0002_sessions_aggregates
# 2021-12-01 19:45:57,770 Finished: 0002_sessions_aggregates
# 2021-12-01 19:45:57,792 Running migration: 0003_sessions_matview
# 2021-12-01 19:45:57,849 Finished: 0003_sessions_matview
# Finished running migrations
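If you want to confirm that everything was applied, you can re-run the list command from the previous step; completed migrations are no longer shown with an empty [ ] status box:

snuba migrations list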
Check the Migrations

Enter the ClickHouse container:
docker exec -it sentry_clickhouse clickhouse-client
# Run the following sql statement: select count() from sentry_local
# ClickHouse client version 20.3.9.70 (official build).
# Connecting to localhost:9000 as user default.
# Connected to ClickHouse server version 20.3.9 revision 54433.
# a7f3af7d52bb :) select count() from sentry_local
# SELECT count()
# FROM sentry_local
# ┌─count()─┐
# │       0 │
# └─────────┘
# 1 rows in set. Elapsed: 0.008 sec.
# a7f3af7d52bb :)
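If you would like to see every table the migrations created, a one-off query through the same container works as well (standard clickhouse-client usage; the exact table list depends on your Snuba version):

docker exec -it sentry_clickhouse clickhouse-client --query "SHOW TABLES"
# expect a list that includes sentry_local among the tables created above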
List the Entity Datasets

snuba entities list
# Declared Entities:
# discover
# events
# groups
# groupassignee
# groupedmessage
# metrics_sets
# metrics_counters
# metrics_distributions
# outcomes
# outcomes_raw
# sessions
# org_sessions
# spans
# transactions
# discover_transactions
# discover_events

Start the Development Server
This command starts the API and all the Snuba consumers that ingest data from Kafka:
snuba devserver

Go to http://localhost:1218/events/snql and you will see a simple query UI.
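You can also query the same endpoint from the command line. The sketch below is illustrative only: it assumes the endpoint accepts a JSON body whose query field holds a SnQL statement, and the project_id and time range are placeholder values; check the Snuba documentation for the exact payload your version expects.

curl -X POST http://localhost:1218/events/snql \
  -H "Content-Type: application/json" \
  -d @- <<'EOF'
{
  "dataset": "events",
  "query": "MATCH (events) SELECT count() AS count WHERE project_id = 1 AND timestamp >= toDateTime('2021-12-01T00:00:00') AND timestamp < toDateTime('2021-12-02T00:00:00')"
}
EOF
# With nothing ingested yet, the count should come back as 0, matching the ClickHouse check above.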