main
mula.liu 2025-12-02 14:29:38 +08:00
commit 5f3cb8beeb
107 changed files with 9811 additions and 0 deletions

29
.env.example 100644
@@ -0,0 +1,29 @@
# Application Settings
APP_NAME=Cosmo - Deep Space Explorer
API_PREFIX=/api
# CORS Settings (comma-separated list)
CORS_ORIGINS=http://localhost:5173,http://localhost:3000
# Cache Settings
CACHE_TTL_DAYS=3
# Database Settings (PostgreSQL)
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_NAME=cosmo_db
DATABASE_USER=postgres
DATABASE_PASSWORD=postgres
DATABASE_POOL_SIZE=20
DATABASE_MAX_OVERFLOW=10
# Redis Settings
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD=
REDIS_MAX_CONNECTIONS=50
# File Upload Settings
UPLOAD_DIR=upload
MAX_UPLOAD_SIZE=10485760 # 10MB in bytes

47
.gitignore vendored 100644
@@ -0,0 +1,47 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
env/
venv/
ENV/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# Environment
.env
.venv
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
Thumbs.db
# Logs
*.log
# Testing
.pytest_cache/
.coverage
htmlcov/

121
ADMIN_PROGRESS.md 100644
@@ -0,0 +1,121 @@
# Admin System - Progress Report
## Completed Work
### 1. Database Design and Initialization ✅
#### Tables created:
- **users** - user table
  - id (primary key)
  - username (unique)
  - password_hash (hashed password)
  - email
  - full_name
  - is_active (active flag)
  - created_at, updated_at, last_login_at (timestamps)
- **roles** - role table
  - id (primary key)
  - name (role name, e.g. 'admin', 'user')
  - display_name (display name)
  - description
  - created_at, updated_at
- **user_roles** - user-role association table (many-to-many)
  - user_id, role_id (composite primary key)
  - created_at
- **menus** - menu table
  - id (primary key)
  - parent_id (parent menu ID, enables a tree structure)
  - name (menu name)
  - title (display title)
  - icon (icon name)
  - path (route path)
  - component (component path)
  - sort_order (display order)
  - is_active (active flag)
  - description
  - created_at, updated_at
- **role_menus** - role-menu association table
  - id (primary key)
  - role_id, menu_id
  - created_at
### 2. Seed Data ✅
#### Roles:
- **admin** - administrator role (all permissions)
- **user** - regular user role (basic access)
#### Administrator account:
- Username: `cosmo`
- Password: `cosmo`
- Email: admin@cosmo.com
- Role: admin
#### Menu structure:
```
├── Dashboard (/admin/dashboard)
└── Data Management (parent menu)
    ├── Celestial Body List (/admin/celestial-bodies)
    ├── Static Data List (/admin/static-data)
    └── NASA Data Download Management (/admin/nasa-data)
```
### 3. Code Files
#### Database models (ORM)
- `/backend/app/models/db/user.py` - user model
- `/backend/app/models/db/role.py` - role model
- `/backend/app/models/db/menu.py` - menu model
#### Scripts
- `/backend/scripts/seed_admin.py` - script that seeds the admin data
#### Dependencies
- Added `bcrypt==5.0.0` for password hashing
### 4. Scripts Executed
```bash
# 1. Create the database tables
./venv/bin/python scripts/init_db.py
# 2. Seed the admin data
./venv/bin/python scripts/seed_admin.py
```
## Table Relationships
The user-role part of this diagram is also sketched in code right below it.
```
users ←→ user_roles ←→ roles
                         │
                    role_menus
                         │
                       menus (supports parent/child nesting)
```
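The users/roles side maps onto SQLAlchemy as a classic many-to-many relationship. A minimal sketch with simplified columns (the real models live in `app/models/db/user.py` and `app/models/db/role.py`, which carry more fields):

```python
# Minimal sketch of the users <-> roles many-to-many mapping (columns simplified).
from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

# Association table with the composite primary key described above.
user_roles = Table(
    "user_roles",
    Base.metadata,
    Column("user_id", Integer, ForeignKey("users.id", ondelete="CASCADE"), primary_key=True),
    Column("role_id", Integer, ForeignKey("roles.id", ondelete="CASCADE"), primary_key=True),
)

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    username = Column(String(100), unique=True, nullable=False)
    roles = relationship("Role", secondary=user_roles, back_populates="users")

class Role(Base):
    __tablename__ = "roles"
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True, nullable=False)
    users = relationship("User", secondary=user_roles, back_populates="roles")
```

With `user.roles` loaded through the association table, the admin check used later in the API (`"admin" in [r.name for r in user.roles]`) follows directly.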
## Next Steps
As requested, the following still needs to be implemented:
1. **Admin system - celestial body list**
   - API: CRUD operations
   - Frontend pages: list, edit, create
2. **Admin system - static data list**
   - API: manage static data such as constellations and galaxies
   - Frontend pages: category management
3. **Admin system - NASA data download management**
   - API: view download history, trigger data updates
   - Frontend pages: download status monitoring
## Notes
- All passwords are stored hashed with bcrypt (see the sketch below)
- The menu system supports unlimited nesting (via parent_id)
- Role-menu permissions are controlled through the role_menus table
- The admin user has already been created and can be used to log in and test
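As a reference for the first note, a minimal sketch of bcrypt hashing and verification (the project pins `bcrypt==5.0.0`; the real helpers live in `app/services/auth.py`, which is not shown in this excerpt):

```python
# Sketch of bcrypt password hashing/verification (the project pins bcrypt==5.0.0).
import bcrypt

def hash_password(plain: str) -> str:
    # gensalt() generates a salt and embeds it (plus the cost factor) in the hash
    return bcrypt.hashpw(plain.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")

def verify_password(plain: str, password_hash: str) -> bool:
    return bcrypt.checkpw(plain.encode("utf-8"), password_hash.encode("utf-8"))

# Example: verify_password("cosmo", hash_password("cosmo")) -> True
```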

239
CONFIG.md 100644
@@ -0,0 +1,239 @@
# Cosmo Backend Configuration Guide
## Configuration File Layout
```
backend/
├── .env              # Actual configuration (not committed to Git)
├── .env.example      # Configuration template (committed to Git)
├── app/
│   └── config.py     # Configuration management (Pydantic Settings)
└── scripts/
    ├── create_db.py  # Create the database
    ├── init_db.py    # Initialize the table schema
    └── setup.sh      # One-shot setup script
```
## Configuration Options
### 1. PostgreSQL Database
```bash
DATABASE_HOST=localhost        # Database host
DATABASE_PORT=5432             # Database port
DATABASE_NAME=cosmo_db         # Database name
DATABASE_USER=postgres         # Database user
DATABASE_PASSWORD=postgres     # Database password
DATABASE_POOL_SIZE=20          # Connection pool size
DATABASE_MAX_OVERFLOW=10       # Maximum pool overflow
```
**Defaults**:
- Username and password match: `postgres/postgres`
- Local database: `localhost:5432`
- Database name: `cosmo_db`
### 2. Redis Cache
```bash
REDIS_HOST=localhost           # Redis host
REDIS_PORT=6379                # Redis port
REDIS_DB=0                     # Redis database number (0-15)
REDIS_PASSWORD=                # Redis password (empty means no password)
REDIS_MAX_CONNECTIONS=50       # Maximum number of connections
```
**Defaults**:
- Local Redis: `localhost:6379`
- No password authentication
- Database 0
### 3. Application
```bash
APP_NAME=Cosmo - Deep Space Explorer
API_PREFIX=/api
CORS_ORIGINS=["*"]             # Allow all origins in development
CACHE_TTL_DAYS=3               # NASA API cache lifetime (days)
```
### 4. File Uploads
```bash
UPLOAD_DIR=upload              # Upload directory
MAX_UPLOAD_SIZE=10485760       # Maximum file size (10MB)
```
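No upload endpoint appears in this document, but as an illustration only, a hedged sketch of how `MAX_UPLOAD_SIZE` and `UPLOAD_DIR` could be enforced in a FastAPI handler (the route itself is hypothetical; only `settings.max_upload_size` and `settings.upload_dir` come from the project's `app/config.py`):

```python
# Hypothetical upload handler enforcing MAX_UPLOAD_SIZE and UPLOAD_DIR;
# this route is illustrative and not part of the API files in this commit.
from pathlib import Path

from fastapi import APIRouter, HTTPException, UploadFile, status

from app.config import settings  # exposes settings.max_upload_size / settings.upload_dir

router = APIRouter()

@router.post("/upload")
async def upload_file(file: UploadFile):
    content = await file.read()
    if len(content) > settings.max_upload_size:
        raise HTTPException(
            status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
            detail=f"File exceeds {settings.max_upload_size} bytes",
        )
    dest = Path(settings.upload_dir) / (file.filename or "unnamed")
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(content)
    return {"path": str(dest), "size": len(content)}
```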
## Quick Start
### 1. Make Sure the Services Are Running
Make sure PostgreSQL and Redis are installed and running locally:
```bash
# Check PostgreSQL
psql -U postgres -c "SELECT version();"
# Check Redis
redis-cli ping   # should return PONG
```
### 2. Configure Environment Variables
The `.env` file has already been created with the following defaults:
- PostgreSQL: `postgres/postgres@localhost:5432/cosmo_db`
- Redis: `localhost:6379` (no password)
To change anything, edit `backend/.env` directly.
### 3. Install Dependencies
```bash
cd backend
pip install -r requirements.txt
```
### 4. Initialize the Database
```bash
# Option 1: one-shot script (recommended)
chmod +x scripts/setup.sh
./scripts/setup.sh
# Option 2: run the steps manually
python scripts/create_db.py   # create the database
python scripts/init_db.py     # initialize the table schema
```
### 5. Start the Service
```bash
# Development mode (auto-reload)
python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
# Or run the app directly
python app/main.py
```
Then visit:
- API docs: http://localhost:8000/docs
- Health check: http://localhost:8000/health
## Verifying the Configuration
Once the service is up, hit the health check endpoint to verify the configuration:
```bash
curl http://localhost:8000/health
```
Example of a healthy response:
```json
{
  "status": "healthy",
  "redis": {
    "connected": true,
    "used_memory_human": "1.2M",
    "connected_clients": 2
  },
  "database": "connected"
}
```
## Troubleshooting
### PostgreSQL connection failure
**Symptom**: `Connection refused` or `password authentication failed`
**Fix**:
1. Make sure PostgreSQL is running
2. Check the username and password in `.env`
3. Verify the user's permissions:
```bash
psql -U postgres -c "SELECT current_user;"
```
### Redis connection failure
**Symptom**: the Redis connection fails but the service keeps running
**Explanation**:
- When the Redis connection fails, the application automatically falls back to the in-memory cache only
- Core functionality is unaffected, but cross-process caching is lost
- The log shows a warning: `⚠ Redis connection failed`
**Fix**:
1. Make sure Redis is running: `redis-cli ping`
2. Check the Redis port: `lsof -i :6379`
3. Restart Redis:
   - macOS: `brew services restart redis`
   - Linux: `sudo systemctl restart redis`
### Database already exists
**Symptom**: `database "cosmo_db" already exists`
**Explanation**: this is an informational message, not an error.
**Fix**:
- To reset the database, drop it and recreate it:
```bash
psql -U postgres -c "DROP DATABASE cosmo_db;"
python scripts/create_db.py
python scripts/init_db.py
```
## Production Configuration
For production, adjust the following settings:
```bash
# Security
CORS_ORIGINS=["https://yourdomain.com"]   # restrict allowed origins
# Database tuning
DATABASE_POOL_SIZE=50                     # larger connection pool
DATABASE_MAX_OVERFLOW=20
# Redis password
REDIS_PASSWORD=your_secure_password       # set a Redis password
```
## Configuration Best Practices
1. **Never commit `.env` to Git**
   - `.env` is already listed in `.gitignore`
   - Only commit `.env.example` as a template
2. **Override with environment variables**
```bash
export DATABASE_PASSWORD=new_password
python app/main.py
```
3. **Per-environment configuration files** (see the sketch below)
```bash
.env.development   # development
.env.production    # production
.env.test          # testing
```
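Pydantic Settings does not switch files on its own. One possible wiring, a sketch assuming a hypothetical `APP_ENV` variable that is not part of the current configuration (mirrors the `class Config` style used in `app/config.py`):

```python
# Sketch: pick the env file from a hypothetical APP_ENV variable (not in the current config).
import os

from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    app_name: str = "Cosmo - Deep Space Explorer"
    database_host: str = "localhost"

    class Config:
        # APP_ENV=production -> .env.production, APP_ENV=test -> .env.test, ...
        env_file = f".env.{os.getenv('APP_ENV', 'development')}"

settings = Settings()
```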
## Tech Stack
- **FastAPI** - web framework
- **SQLAlchemy 2.0** - ORM (async mode)
- **asyncpg** - async PostgreSQL driver
- **Redis** - cache layer
- **Pydantic Settings** - configuration management
## Database Design
See [`DATABASE_SCHEMA.md`](./DATABASE_SCHEMA.md) for the detailed table design.
Main tables:
- `celestial_bodies` - basic celestial body information
- `positions` - position history (time series)
- `resources` - resource file management
- `static_data` - static astronomical data
- `nasa_cache` - NASA API cache

450
DATABASE_SCHEMA.md 100644
@@ -0,0 +1,450 @@
# Cosmo Database Schema Design
## Database Information
- **Database type**: PostgreSQL 15+
- **Database name**: cosmo_db
- **Character set**: UTF8
---
## Tables
### 1. celestial_bodies - Celestial Body Information
Stores the basic information and metadata for every celestial body.
```sql
CREATE TABLE celestial_bodies (
    id VARCHAR(50) PRIMARY KEY,              -- JPL Horizons ID or custom ID
    name VARCHAR(200) NOT NULL,              -- English name
    name_zh VARCHAR(200),                    -- Chinese name
    type VARCHAR(50) NOT NULL,               -- Body type: star, planet, moon, probe, comet, asteroid, etc.
    description TEXT,                        -- Description
    metadata JSONB,                          -- Extended metadata (launch_date, status, mass, radius, ...)
    is_active BOOLEAN,                       -- Active flag (mainly for probes)
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),
    CONSTRAINT chk_type CHECK (type IN ('star', 'planet', 'moon', 'probe', 'comet', 'asteroid', 'dwarf_planet', 'satellite'))
);
-- Indexes
CREATE INDEX idx_celestial_bodies_type ON celestial_bodies(type);
CREATE INDEX idx_celestial_bodies_name ON celestial_bodies(name);
-- Comments
COMMENT ON TABLE celestial_bodies IS 'Basic celestial body information';
COMMENT ON COLUMN celestial_bodies.id IS 'JPL Horizons ID (e.g. -31 for Voyager 1) or a custom ID';
COMMENT ON COLUMN celestial_bodies.type IS 'Body type: star, planet, moon, probe, comet, asteroid';
COMMENT ON COLUMN celestial_bodies.metadata IS 'Extended metadata in JSON, e.g. {"launch_date": "1977-09-05", "status": "active", "mass": 722, "radius": 2575}';
```
**Example `metadata` JSONB value**:
```json
{
"launch_date": "1977-09-05",
"status": "active",
"mass": 722, // kg
"radius": 2575, // km
"orbit_period": 365.25, // days
"rotation_period": 24, // hours
"discovery_date": "1930-02-18",
"discoverer": "Clyde Tombaugh"
}
```
---
### 2. positions - Position History (Time Series)
Stores position history for each body, supporting historical queries and trajectory playback.
```sql
CREATE TABLE positions (
    id BIGSERIAL PRIMARY KEY,
    body_id VARCHAR(50) NOT NULL REFERENCES celestial_bodies(id) ON DELETE CASCADE,
    time TIMESTAMP NOT NULL,                      -- Timestamp of this position
    x DOUBLE PRECISION NOT NULL,                  -- X coordinate (AU, heliocentric)
    y DOUBLE PRECISION NOT NULL,                  -- Y coordinate (AU)
    z DOUBLE PRECISION NOT NULL,                  -- Z coordinate (AU)
    vx DOUBLE PRECISION,                          -- X velocity (optional)
    vy DOUBLE PRECISION,                          -- Y velocity (optional)
    vz DOUBLE PRECISION,                          -- Z velocity (optional)
    source VARCHAR(50) DEFAULT 'nasa_horizons',   -- Data source
    created_at TIMESTAMP DEFAULT NOW(),
    CONSTRAINT chk_source CHECK (source IN ('nasa_horizons', 'calculated', 'user_defined', 'imported'))
);
-- Indexes (critical for efficient queries)
CREATE INDEX idx_positions_body_time ON positions(body_id, time DESC);
CREATE INDEX idx_positions_time ON positions(time);
CREATE INDEX idx_positions_body_id ON positions(body_id);
-- Comments
COMMENT ON TABLE positions IS 'Celestial body position history (time series data)';
COMMENT ON COLUMN positions.body_id IS 'References celestial_bodies.id';
COMMENT ON COLUMN positions.time IS 'Observation/computation time of this position (UTC)';
COMMENT ON COLUMN positions.x IS 'X coordinate in AU (astronomical units), heliocentric frame';
COMMENT ON COLUMN positions.source IS 'Data source: nasa_horizons (NASA API), calculated, user_defined, imported';
```
**Use cases** (see the sketch below):
- Query a body's position at a specific point in time
- Query a body's trajectory over a time range
- Support the time-travel feature (replay historical positions)
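A minimal sketch of the time-range query through the project's async SQLAlchemy setup (assuming the `Position` model and `AsyncSessionLocal` introduced elsewhere in this commit, and Voyager 1's Horizons ID `-31`):

```python
# Sketch: fetch Voyager 1's (-31) trajectory for 2025 through the async session.
from datetime import datetime

from sqlalchemy import select

from app.database import AsyncSessionLocal
from app.models.db import Position

async def fetch_trajectory(body_id: str = "-31") -> list[tuple]:
    async with AsyncSessionLocal() as session:
        result = await session.execute(
            select(Position)
            .where(Position.body_id == body_id)
            .where(Position.time.between(datetime(2025, 1, 1), datetime(2025, 12, 31)))
            .order_by(Position.time)
        )
        # Each row carries heliocentric x/y/z coordinates in AU
        return [(p.time, p.x, p.y, p.z) for p in result.scalars().all()]
```

The `(body_id, time DESC)` composite index above is what keeps this kind of per-body range scan cheap.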
---
### 3. resources - Resource File Management
Central registry for textures, 3D models, icons, and other static assets.
```sql
CREATE TABLE resources (
    id SERIAL PRIMARY KEY,
    body_id VARCHAR(50) REFERENCES celestial_bodies(id) ON DELETE CASCADE,
    resource_type VARCHAR(50) NOT NULL,   -- Resource type
    file_path VARCHAR(500) NOT NULL,      -- Path relative to the upload directory
    file_size INTEGER,                    -- File size (bytes)
    mime_type VARCHAR(100),               -- MIME type
    metadata JSONB,                       -- Extended info (resolution, format, ...)
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),
    CONSTRAINT chk_resource_type CHECK (resource_type IN ('texture', 'model', 'icon', 'thumbnail', 'data'))
);
-- Indexes
CREATE INDEX idx_resources_body_id ON resources(body_id);
CREATE INDEX idx_resources_type ON resources(resource_type);
-- Comments
COMMENT ON TABLE resources IS 'Resource files (textures, models, icons, ...)';
COMMENT ON COLUMN resources.resource_type IS 'Resource type: texture, model (3D model), icon, thumbnail, data (data file)';
COMMENT ON COLUMN resources.file_path IS 'Relative path, e.g. textures/planets/earth_2k.jpg';
COMMENT ON COLUMN resources.metadata IS 'Metadata in JSON, e.g. {"width": 2048, "height": 1024, "format": "jpg"}';
```
**Example `metadata` JSONB value**:
```json
{
"width": 2048,
"height": 1024,
"format": "jpg",
"color_space": "sRGB",
"model_format": "glb",
"polygon_count": 15000
}
```
---
### 4. static_data - Static Data
Stores constellations, galaxies, stars, and other static astronomical data that needs no dynamic computation.
```sql
CREATE TABLE static_data (
    id SERIAL PRIMARY KEY,
    category VARCHAR(50) NOT NULL,   -- Data category
    name VARCHAR(200) NOT NULL,      -- Name
    name_zh VARCHAR(200),            -- Chinese name
    data JSONB NOT NULL,             -- Full payload (JSON)
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),
    CONSTRAINT chk_category CHECK (category IN ('constellation', 'galaxy', 'star', 'nebula', 'cluster')),
    CONSTRAINT uq_category_name UNIQUE (category, name)
);
-- Indexes
CREATE INDEX idx_static_data_category ON static_data(category);
CREATE INDEX idx_static_data_name ON static_data(name);
CREATE INDEX idx_static_data_data ON static_data USING GIN(data);   -- JSONB index
-- Comments
COMMENT ON TABLE static_data IS 'Static astronomical data (constellations, galaxies, stars, ...)';
COMMENT ON COLUMN static_data.category IS 'Category: constellation, galaxy, star, nebula, cluster';
COMMENT ON COLUMN static_data.data IS 'Full JSON payload; the structure depends on the category';
```
**Example `data` JSONB values**:
**Constellation**:
```json
{
"stars": [
{"name": "Betelgeuse", "ra": 88.79, "dec": 7.41},
{"name": "Rigel", "ra": 78.63, "dec": -8.20}
],
"lines": [[0, 1], [1, 2]],
"mythology": "猎户座的神话故事..."
}
```
**Galaxy**:
```json
{
"type": "spiral",
"distance_mly": 2.537,
"ra": 10.68,
"dec": 41.27,
"magnitude": 3.44,
"diameter_kly": 220,
"color": "#88aaff"
}
```
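For illustration, a hedged sketch of seeding one such record through the project's async session (assuming the `StaticData` ORM model mirrors the columns above):

```python
# Sketch: seed an Orion constellation record into static_data via the async session.
from app.database import AsyncSessionLocal
from app.models.db import StaticData

async def seed_orion() -> None:
    async with AsyncSessionLocal() as session:
        session.add(StaticData(
            category="constellation",
            name="Orion",
            name_zh="猎户座",
            data={
                "stars": [
                    {"name": "Betelgeuse", "ra": 88.79, "dec": 7.41},
                    {"name": "Rigel", "ra": 78.63, "dec": -8.20},
                ],
                "lines": [[0, 1]],
            },
        ))
        await session.commit()
```

Note that the `uq_category_name` constraint means re-running such a seed would fail unless the insert is made idempotent (e.g., by checking for the record first).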
---
### 5. nasa_cache - NASA API Cache
Persists NASA Horizons API responses to reduce API calls.
```sql
CREATE TABLE nasa_cache (
    cache_key VARCHAR(500) PRIMARY KEY,   -- Cache key: body_id:start:end:step
    body_id VARCHAR(50),
    start_time TIMESTAMP,                 -- Query start time
    end_time TIMESTAMP,                   -- Query end time
    step VARCHAR(10),                     -- Time step (e.g. '1d')
    data JSONB NOT NULL,                  -- Full API response
    expires_at TIMESTAMP NOT NULL,        -- Expiration time
    created_at TIMESTAMP DEFAULT NOW(),
    CONSTRAINT chk_time_range CHECK (end_time >= start_time)
);
-- Indexes
CREATE INDEX idx_nasa_cache_body_id ON nasa_cache(body_id);
CREATE INDEX idx_nasa_cache_expires ON nasa_cache(expires_at);
CREATE INDEX idx_nasa_cache_time_range ON nasa_cache(body_id, start_time, end_time);
-- Automatic cleanup of expired entries (optional, requires the pg_cron extension)
-- SELECT cron.schedule('clean_expired_cache', '0 0 * * *', 'DELETE FROM nasa_cache WHERE expires_at < NOW()');
-- Comments
COMMENT ON TABLE nasa_cache IS 'NASA Horizons API response cache';
COMMENT ON COLUMN nasa_cache.cache_key IS 'Cache key format: {body_id}:{start}:{end}:{step}, e.g. -31:2025-11-27:2025-11-28:1d';
COMMENT ON COLUMN nasa_cache.data IS 'Full JSON response from the NASA API';
COMMENT ON COLUMN nasa_cache.expires_at IS 'Expiration time; entries become invalid once past it';
```
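A hedged sketch of how the cache key and expiry described in the comments could be built; the actual cache service code is not shown in this excerpt, and the TTL simply mirrors the `CACHE_TTL_DAYS` setting:

```python
# Sketch: build the {body_id}:{start}:{end}:{step} cache key and its expiry timestamp.
from datetime import datetime, timedelta

CACHE_TTL_DAYS = 3  # mirrors the CACHE_TTL_DAYS application setting

def build_cache_key(body_id: str, start: str, end: str, step: str) -> str:
    # e.g. build_cache_key("-31", "2025-11-27", "2025-11-28", "1d")
    #      -> "-31:2025-11-27:2025-11-28:1d"
    return f"{body_id}:{start}:{end}:{step}"

def cache_expires_at(now: datetime | None = None) -> datetime:
    return (now or datetime.utcnow()) + timedelta(days=CACHE_TTL_DAYS)
```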
---
## Initialization Scripts
### Creating the Database
```sql
-- Connect to PostgreSQL first (from a shell: psql -U postgres)
-- Create the database
CREATE DATABASE cosmo_db
    WITH
    ENCODING = 'UTF8'
    LC_COLLATE = 'en_US.UTF-8'
    LC_CTYPE = 'en_US.UTF-8'
    TEMPLATE = template0;
-- Connect to the new database
\c cosmo_db
-- Create optional extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";   -- UUID generation
CREATE EXTENSION IF NOT EXISTS "pg_trgm";     -- fuzzy search
```
### Full Table Creation Script
```sql
-- Create the tables in dependency order
-- 1. Celestial body information table
CREATE TABLE celestial_bodies (
    id VARCHAR(50) PRIMARY KEY,
    name VARCHAR(200) NOT NULL,
    name_zh VARCHAR(200),
    type VARCHAR(50) NOT NULL,
    description TEXT,
    metadata JSONB,
    is_active BOOLEAN,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),
    CONSTRAINT chk_type CHECK (type IN ('star', 'planet', 'moon', 'probe', 'comet', 'asteroid', 'dwarf_planet', 'satellite'))
);
CREATE INDEX idx_celestial_bodies_type ON celestial_bodies(type);
CREATE INDEX idx_celestial_bodies_name ON celestial_bodies(name);
-- 2. Position history table
CREATE TABLE positions (
id BIGSERIAL PRIMARY KEY,
body_id VARCHAR(50) NOT NULL REFERENCES celestial_bodies(id) ON DELETE CASCADE,
time TIMESTAMP NOT NULL,
x DOUBLE PRECISION NOT NULL,
y DOUBLE PRECISION NOT NULL,
z DOUBLE PRECISION NOT NULL,
vx DOUBLE PRECISION,
vy DOUBLE PRECISION,
vz DOUBLE PRECISION,
source VARCHAR(50) DEFAULT 'nasa_horizons',
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT chk_source CHECK (source IN ('nasa_horizons', 'calculated', 'user_defined', 'imported'))
);
CREATE INDEX idx_positions_body_time ON positions(body_id, time DESC);
CREATE INDEX idx_positions_time ON positions(time);
CREATE INDEX idx_positions_body_id ON positions(body_id);
-- 3. Resource management table
CREATE TABLE resources (
id SERIAL PRIMARY KEY,
body_id VARCHAR(50) REFERENCES celestial_bodies(id) ON DELETE CASCADE,
resource_type VARCHAR(50) NOT NULL,
file_path VARCHAR(500) NOT NULL,
file_size INTEGER,
mime_type VARCHAR(100),
metadata JSONB,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT chk_resource_type CHECK (resource_type IN ('texture', 'model', 'icon', 'thumbnail', 'data'))
);
CREATE INDEX idx_resources_body_id ON resources(body_id);
CREATE INDEX idx_resources_type ON resources(resource_type);
-- 4. Static data table
CREATE TABLE static_data (
id SERIAL PRIMARY KEY,
category VARCHAR(50) NOT NULL,
name VARCHAR(200) NOT NULL,
name_zh VARCHAR(200),
data JSONB NOT NULL,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT chk_category CHECK (category IN ('constellation', 'galaxy', 'star', 'nebula', 'cluster')),
CONSTRAINT uq_category_name UNIQUE (category, name)
);
CREATE INDEX idx_static_data_category ON static_data(category);
CREATE INDEX idx_static_data_name ON static_data(name);
CREATE INDEX idx_static_data_data ON static_data USING GIN(data);
-- 5. NASA cache table
CREATE TABLE nasa_cache (
cache_key VARCHAR(500) PRIMARY KEY,
body_id VARCHAR(50),
start_time TIMESTAMP,
end_time TIMESTAMP,
step VARCHAR(10),
data JSONB NOT NULL,
expires_at TIMESTAMP NOT NULL,
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT chk_time_range CHECK (end_time >= start_time)
);
CREATE INDEX idx_nasa_cache_body_id ON nasa_cache(body_id);
CREATE INDEX idx_nasa_cache_expires ON nasa_cache(expires_at);
CREATE INDEX idx_nasa_cache_time_range ON nasa_cache(body_id, start_time, end_time);
```
---
## Data Relationships
```
celestial_bodies
├── positions  (1:N) - position history per body
├── resources  (1:N) - resource files per body
└── nasa_cache (1:N) - NASA API cache entries
static_data - standalone table, not linked to celestial_bodies
```
---
## Query Examples
### Latest position of a body
```sql
SELECT b.name, b.name_zh, p.x, p.y, p.z, p.time
FROM celestial_bodies b
LEFT JOIN LATERAL (
SELECT * FROM positions
WHERE body_id = b.id
ORDER BY time DESC
LIMIT 1
) p ON true
WHERE b.id = '-31';
```
### Trajectory of a body over a time range
```sql
SELECT time, x, y, z
FROM positions
WHERE body_id = '-31'
AND time BETWEEN '2025-01-01' AND '2025-12-31'
ORDER BY time;
```
### All planets that have textures
```sql
SELECT b.name, r.file_path
FROM celestial_bodies b
INNER JOIN resources r ON b.id = r.body_id
WHERE b.type = 'planet' AND r.resource_type = 'texture';
```
### All active probes
```sql
SELECT id, name, name_zh, metadata->>'status' as status
FROM celestial_bodies
WHERE type = 'probe'
AND metadata->>'status' = 'active';
```
---
## Maintenance Recommendations
1. **Periodically clean up expired cache entries**:
```sql
DELETE FROM nasa_cache WHERE expires_at < NOW();
```
2. **Analyze table statistics**:
```sql
ANALYZE celestial_bodies;
ANALYZE positions;
ANALYZE nasa_cache;
```
3. **Rebuild indexes (if performance degrades)**:
```sql
REINDEX TABLE positions;
```
4. **Back up the database**:
```bash
pg_dump -U postgres cosmo_db > backup_$(date +%Y%m%d).sql
```
---
## Extension Ideas
### Tables that may be needed later
1. **users** - user table (if a user system is needed)
2. **user_favorites** - user favorites (bookmarked bodies)
3. **observation_logs** - observation logs (user records)
4. **simulation_configs** - simulation configs (user-defined scenarios)
### Performance-oriented extensions
1. **TimescaleDB** - time-series optimization
```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;
SELECT create_hypertable('positions', 'time');
```
2. **PostGIS** - spatial data extension
```sql
CREATE EXTENSION IF NOT EXISTS postgis;
ALTER TABLE positions ADD COLUMN geom geometry(POINTZ, 4326);
```

76
README.md 100644
@@ -0,0 +1,76 @@
# Cosmo Backend
Backend API for the Cosmo deep space explorer visualization system.
## Setup
1. Create virtual environment:
```bash
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Copy environment file:
```bash
cp .env.example .env
```
## Running
Start the development server:
```bash
cd backend
python -m app.main
```
Or using uvicorn directly:
```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
The API will be available at:
- API: http://localhost:8000/api
- Docs: http://localhost:8000/docs
- Health: http://localhost:8000/health
## API Endpoints
### Get Celestial Positions
```
GET /api/celestial/positions
```
Query parameters:
- `start_time`: ISO 8601 datetime (optional)
- `end_time`: ISO 8601 datetime (optional)
- `step`: Time step, e.g., "1d", "12h" (default: "1d")
Example:
```
http://localhost:8000/api/celestial/positions?start_time=2025-01-01T00:00:00Z&end_time=2025-01-10T00:00:00Z&step=1d
```
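For example, the same request from Python (a sketch using the `requests` package, which is not necessarily a dependency of this project). The response follows the `CelestialDataResponse` shape: a `bodies` list, each body carrying its `positions`:

```python
# Sketch: query the positions endpoint and print each body's latest coordinates.
import requests

resp = requests.get(
    "http://localhost:8000/api/celestial/positions",
    params={
        "start_time": "2025-01-01T00:00:00Z",
        "end_time": "2025-01-10T00:00:00Z",
        "step": "1d",
    },
    timeout=30,
)
resp.raise_for_status()
for body in resp.json()["bodies"]:
    last = body["positions"][-1] if body["positions"] else None
    print(body["name"], last)
```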
### Get Body Info
```
GET /api/celestial/info/{body_id}
```
Example:
```
http://localhost:8000/api/celestial/info/-31
```
### List All Bodies
```
GET /api/celestial/list
```
### Clear Cache
```
POST /api/celestial/cache/clear
```

0
app/__init__.py 100644

297
app/api/auth.py 100644
@@ -0,0 +1,297 @@
"""
Authentication API routes
"""
from datetime import datetime, timedelta
from fastapi import APIRouter, HTTPException, Depends, status
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select, update
from sqlalchemy.orm import selectinload
from pydantic import BaseModel
from app.database import get_db
from app.models.db import User, Role, Menu
from app.services.auth import verify_password, create_access_token, hash_password
from app.services.auth_deps import get_current_user
from app.services.token_service import token_service
from app.config import settings
# HTTP Bearer security
security = HTTPBearer()
router = APIRouter(prefix="/auth", tags=["auth"])
# Pydantic models
class LoginRequest(BaseModel):
username: str
password: str
class RegisterRequest(BaseModel):
username: str
password: str
email: str | None = None
full_name: str | None = None
class LoginResponse(BaseModel):
access_token: str
token_type: str = "bearer"
user: dict
class UserInfo(BaseModel):
id: int
username: str
email: str | None
full_name: str | None
roles: list[str]
class MenuNode(BaseModel):
id: int
name: str
title: str
icon: str | None
path: str | None
component: str | None
children: list['MenuNode'] | None = None
@router.post("/register", response_model=LoginResponse)
async def register(
register_data: RegisterRequest,
db: AsyncSession = Depends(get_db)
):
"""
Register a new user
"""
# Check if username already exists
result = await db.execute(
select(User).where(User.username == register_data.username)
)
if result.scalar_one_or_none():
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Username already registered"
)
# Check if email already exists (if provided)
if register_data.email:
result = await db.execute(
select(User).where(User.email == register_data.email)
)
if result.scalar_one_or_none():
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Email already registered"
)
# Get 'user' role
result = await db.execute(
select(Role).where(Role.name == "user")
)
user_role = result.scalar_one_or_none()
if not user_role:
# Should not happen if seeded correctly, but fallback handling
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="Default role 'user' not found"
)
# Create new user
new_user = User(
username=register_data.username,
password_hash=hash_password(register_data.password),
email=register_data.email,
full_name=register_data.full_name,
is_active=True
)
db.add(new_user)
await db.flush() # Flush to get ID
# Assign role
from app.models.db.user import user_roles
await db.execute(
user_roles.insert().values(
user_id=new_user.id,
role_id=user_role.id
)
)
await db.commit()
await db.refresh(new_user)
# Create access token
access_token = create_access_token(
data={"sub": str(new_user.id), "username": new_user.username}
)
# Save token to Redis
await token_service.save_token(access_token, new_user.id, new_user.username)
# Return token and user info (simulate fetch with roles loaded)
return LoginResponse(
access_token=access_token,
user={
"id": new_user.id,
"username": new_user.username,
"email": new_user.email,
"full_name": new_user.full_name,
"roles": [user_role.name]
}
)
@router.post("/login", response_model=LoginResponse)
async def login(
login_data: LoginRequest,
db: AsyncSession = Depends(get_db)
):
"""
Login with username and password
Returns JWT access token
"""
# Query user with roles
result = await db.execute(
select(User)
.options(selectinload(User.roles))
.where(User.username == login_data.username)
)
user = result.scalar_one_or_none()
# Verify user exists and password is correct
if not user or not verify_password(login_data.password, user.password_hash):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect username or password",
headers={"WWW-Authenticate": "Bearer"},
)
# Check if user is active
if not user.is_active:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Inactive user"
)
# Update last login time
await db.execute(
update(User)
.where(User.id == user.id)
.values(last_login_at=datetime.utcnow())
)
await db.commit()
# Create access token
access_token = create_access_token(
data={"sub": str(user.id), "username": user.username}
)
# Save token to Redis
await token_service.save_token(access_token, user.id, user.username)
# Return token and user info
return LoginResponse(
access_token=access_token,
user={
"id": user.id,
"username": user.username,
"email": user.email,
"full_name": user.full_name,
"roles": [role.name for role in user.roles]
}
)
@router.get("/me", response_model=UserInfo)
async def get_current_user_info(
current_user: User = Depends(get_current_user)
):
"""
Get current user information
"""
return UserInfo(
id=current_user.id,
username=current_user.username,
email=current_user.email,
full_name=current_user.full_name,
roles=[role.name for role in current_user.roles]
)
@router.get("/menus", response_model=list[MenuNode])
async def get_user_menus(
current_user: User = Depends(get_current_user),
db: AsyncSession = Depends(get_db)
):
"""
Get menus accessible to current user based on their roles
"""
# Get all role IDs for current user
role_ids = [role.id for role in current_user.roles]
if not role_ids:
return []
# Query menus for user's roles
from app.models.db.menu import RoleMenu
result = await db.execute(
select(Menu)
.join(RoleMenu, RoleMenu.menu_id == Menu.id)
.where(RoleMenu.role_id.in_(role_ids))
.where(Menu.is_active == True)
.order_by(Menu.sort_order)
.distinct()
)
menus = result.scalars().all()
# Build tree structure
menu_dict = {}
root_menus = []
for menu in menus:
menu_node = MenuNode(
id=menu.id,
name=menu.name,
title=menu.title,
icon=menu.icon,
path=menu.path,
component=menu.component,
children=[]
)
menu_dict[menu.id] = menu_node
if menu.parent_id is None:
root_menus.append(menu_node)
# Attach children to parents
for menu in menus:
if menu.parent_id and menu.parent_id in menu_dict:
parent = menu_dict[menu.parent_id]
if parent.children is None:
parent.children = []
parent.children.append(menu_dict[menu.id])
# Remove empty children lists
for menu_node in menu_dict.values():
if menu_node.children == []:
menu_node.children = None
return root_menus
@router.post("/logout")
async def logout(
credentials: HTTPAuthorizationCredentials = Depends(security)
):
"""
Logout - revoke current token
"""
token = credentials.credentials
await token_service.revoke_token(token)
return {"message": "Logged out successfully"}

47
app/api/danmaku.py 100644
@@ -0,0 +1,47 @@
from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.ext.asyncio import AsyncSession
from pydantic import BaseModel, constr
from typing import List
from app.database import get_db
from app.models.db import User
from app.services.auth_deps import get_current_user
from app.services.danmaku_service import danmaku_service
router = APIRouter(prefix="/danmaku", tags=["danmaku"])
class DanmakuCreate(BaseModel):
text: constr(max_length=20, min_length=1)
class DanmakuResponse(BaseModel):
id: str
uid: str
username: str
text: str
ts: float
@router.post("/send", response_model=DanmakuResponse)
async def send_danmaku(
data: DanmakuCreate,
current_user: User = Depends(get_current_user),
db: AsyncSession = Depends(get_db)
):
"""Send a short danmaku message (max 20 chars)"""
try:
result = await danmaku_service.add_danmaku(
user_id=current_user.id,
username=current_user.username,
text=data.text,
db=db
)
return result
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@router.get("/list", response_model=List[DanmakuResponse])
async def get_danmaku_list(
db: AsyncSession = Depends(get_db)
):
"""Get all active danmaku messages"""
# This endpoint is public (or could be protected if needed)
return await danmaku_service.get_active_danmaku(db)

1547
app/api/routes.py 100644

File diff suppressed because it is too large

253
app/api/system.py 100644
@@ -0,0 +1,253 @@
"""
System Settings API Routes
"""
from fastapi import APIRouter, HTTPException, Query, Depends, status
from sqlalchemy.ext.asyncio import AsyncSession
from typing import Optional, Dict, Any, List
import logging
from pydantic import BaseModel
from app.services.system_settings_service import system_settings_service
from app.services.redis_cache import redis_cache
from app.services.cache import cache_service
from app.database import get_db
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/system", tags=["system"])
# Pydantic models
class SettingCreate(BaseModel):
key: str
value: Any
value_type: str = "string"
category: str = "general"
label: str
description: Optional[str] = None
is_public: bool = False
class SettingUpdate(BaseModel):
value: Optional[Any] = None
value_type: Optional[str] = None
category: Optional[str] = None
label: Optional[str] = None
description: Optional[str] = None
is_public: Optional[bool] = None
# ============================================================
# System Settings CRUD APIs
# ============================================================
@router.get("/settings")
async def list_settings(
category: Optional[str] = Query(None, description="Filter by category"),
is_public: Optional[bool] = Query(None, description="Filter by public status"),
db: AsyncSession = Depends(get_db)
):
"""
Get all system settings
Query parameters:
- category: Optional filter by category (e.g., 'visualization', 'cache', 'ui')
- is_public: Optional filter by public status (true for frontend-accessible settings)
"""
settings = await system_settings_service.get_all_settings(db, category, is_public)
result = []
for setting in settings:
# Parse value based on type
parsed_value = await system_settings_service.get_setting_value(setting.key, db)
result.append({
"id": setting.id,
"key": setting.key,
"value": parsed_value,
"raw_value": setting.value,
"value_type": setting.value_type,
"category": setting.category,
"label": setting.label,
"description": setting.description,
"is_public": setting.is_public,
"created_at": setting.created_at.isoformat() if setting.created_at else None,
"updated_at": setting.updated_at.isoformat() if setting.updated_at else None,
})
return {"settings": result}
@router.get("/settings/{key}")
async def get_setting(
key: str,
db: AsyncSession = Depends(get_db)
):
"""Get a single setting by key"""
setting = await system_settings_service.get_setting(key, db)
if not setting:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Setting '{key}' not found"
)
parsed_value = await system_settings_service.get_setting_value(key, db)
return {
"id": setting.id,
"key": setting.key,
"value": parsed_value,
"raw_value": setting.value,
"value_type": setting.value_type,
"category": setting.category,
"label": setting.label,
"description": setting.description,
"is_public": setting.is_public,
"created_at": setting.created_at.isoformat() if setting.created_at else None,
"updated_at": setting.updated_at.isoformat() if setting.updated_at else None,
}
@router.post("/settings", status_code=status.HTTP_201_CREATED)
async def create_setting(
data: SettingCreate,
db: AsyncSession = Depends(get_db)
):
"""Create a new system setting"""
# Check if setting already exists
existing = await system_settings_service.get_setting(data.key, db)
if existing:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Setting '{data.key}' already exists"
)
new_setting = await system_settings_service.create_setting(data.dict(), db)
await db.commit()
parsed_value = await system_settings_service.get_setting_value(data.key, db)
return {
"id": new_setting.id,
"key": new_setting.key,
"value": parsed_value,
"value_type": new_setting.value_type,
"category": new_setting.category,
"label": new_setting.label,
"description": new_setting.description,
"is_public": new_setting.is_public,
}
@router.put("/settings/{key}")
async def update_setting(
key: str,
data: SettingUpdate,
db: AsyncSession = Depends(get_db)
):
"""Update a system setting"""
update_data = {k: v for k, v in data.dict().items() if v is not None}
updated = await system_settings_service.update_setting(key, update_data, db)
if not updated:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Setting '{key}' not found"
)
await db.commit()
parsed_value = await system_settings_service.get_setting_value(key, db)
return {
"id": updated.id,
"key": updated.key,
"value": parsed_value,
"value_type": updated.value_type,
"category": updated.category,
"label": updated.label,
"description": updated.description,
"is_public": updated.is_public,
}
@router.delete("/settings/{key}")
async def delete_setting(
key: str,
db: AsyncSession = Depends(get_db)
):
"""Delete a system setting"""
deleted = await system_settings_service.delete_setting(key, db)
if not deleted:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Setting '{key}' not found"
)
await db.commit()
return {"message": f"Setting '{key}' deleted successfully"}
# ============================================================
# Cache Management APIs
# ============================================================
@router.post("/cache/clear")
async def clear_all_caches():
"""
Clear all caches (memory + Redis)
This is a critical operation for platform management.
It clears:
- Memory cache (in-process)
- Redis cache (all positions and NASA data)
"""
logger.info("🧹 Starting cache clear operation...")
# Clear memory cache
cache_service.clear()
logger.info("✓ Memory cache cleared")
# Clear Redis cache
positions_cleared = await redis_cache.clear_pattern("positions:*")
nasa_cleared = await redis_cache.clear_pattern("nasa:*")
logger.info(f"✓ Redis cache cleared ({positions_cleared + nasa_cleared} keys)")
total_cleared = positions_cleared + nasa_cleared
return {
"message": f"All caches cleared successfully ({total_cleared} Redis keys deleted)",
"memory_cache": "cleared",
"redis_cache": {
"positions_keys": positions_cleared,
"nasa_keys": nasa_cleared,
"total": total_cleared
}
}
@router.get("/cache/stats")
async def get_cache_stats():
"""Get cache statistics"""
redis_stats = await redis_cache.get_stats()
return {
"redis": redis_stats,
"memory": {
"description": "In-memory cache (process-level)",
"note": "Statistics not available for in-memory cache"
}
}
@router.post("/settings/init-defaults")
async def initialize_default_settings(
db: AsyncSession = Depends(get_db)
):
"""Initialize default system settings (admin use)"""
await system_settings_service.initialize_default_settings(db)
await db.commit()
return {"message": "Default settings initialized successfully"}

120
app/api/user.py 100644
@@ -0,0 +1,120 @@
from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import selectinload
from sqlalchemy import select, func
from typing import List
from pydantic import BaseModel
from app.database import get_db
from app.models.db import User
from app.services.auth import hash_password
from app.services.auth_deps import get_current_user, require_admin # To protect endpoints
router = APIRouter(prefix="/users", tags=["users"])
# Pydantic models
class UserListItem(BaseModel):
id: int
username: str
email: str | None
full_name: str | None
is_active: bool
roles: list[str]
last_login_at: str | None
created_at: str
class Config:
orm_mode = True
class UserStatusUpdate(BaseModel):
is_active: bool
@router.get("/list")
async def get_user_list(
db: AsyncSession = Depends(get_db),
current_user: User = Depends(get_current_user) # Protect this route
):
"""Get a list of all users"""
# Ensure only admins can see all users
if "admin" not in [role.name for role in current_user.roles]:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Not authorized")
result = await db.execute(
select(User).options(selectinload(User.roles)).order_by(User.id)
)
users = result.scalars().all()
users_list = []
for user in users:
users_list.append({
"id": user.id,
"username": user.username,
"email": user.email,
"full_name": user.full_name,
"is_active": user.is_active,
"roles": [role.name for role in user.roles],
"last_login_at": user.last_login_at.isoformat() if user.last_login_at else None,
"created_at": user.created_at.isoformat()
})
return {"users": users_list}
@router.put("/{user_id}/status")
async def update_user_status(
user_id: int,
status_update: UserStatusUpdate,
db: AsyncSession = Depends(get_db),
current_user: User = Depends(get_current_user)
):
"""Update a user's active status"""
if "admin" not in [role.name for role in current_user.roles]:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Not authorized")
result = await db.execute(select(User).where(User.id == user_id))
user = result.scalar_one_or_none()
if not user:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="User not found")
user.is_active = status_update.is_active
await db.commit()
return {"message": "User status updated successfully"}
@router.post("/{user_id}/reset-password")
async def reset_user_password(
user_id: int,
db: AsyncSession = Depends(get_db),
current_user: User = Depends(get_current_user)
):
"""Reset a user's password to the default"""
if "admin" not in [role.name for role in current_user.roles]:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Not authorized")
result = await db.execute(select(User).where(User.id == user_id))
user = result.scalar_one_or_none()
if not user:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="User not found")
# Hardcoded default password for now.
# TODO: Move to a configurable system parameter.
default_password = "password123"
user.password_hash = hash_password(default_password)
await db.commit()
return {"message": f"Password for user {user.username} has been reset."}
@router.get("/count", response_model=dict)
async def get_user_count(
db: AsyncSession = Depends(get_db),
current_admin_user: User = Depends(require_admin) # Ensure only admin can access
):
"""
Get the total count of registered users.
"""
result = await db.execute(select(func.count(User.id)))
total_users = result.scalar_one()
return {"total_users": total_users}

65
app/config.py 100644
@@ -0,0 +1,65 @@
"""
Application configuration
"""
from pydantic_settings import BaseSettings
from pydantic import Field
class Settings(BaseSettings):
"""Application settings"""
# Application
app_name: str = "Cosmo - Deep Space Explorer"
api_prefix: str = "/api"
# CORS settings - allow all origins for development (IP access support)
cors_origins: list[str] = ["*"]
# Cache settings
cache_ttl_days: int = 3
# JWT settings
jwt_secret_key: str = "your-secret-key-change-this-in-production"
jwt_algorithm: str = "HS256"
jwt_access_token_expire_minutes: int = 60 * 24 # 24 hours
# Database settings (PostgreSQL)
database_host: str = "localhost"
database_port: int = 5432
database_name: str = "cosmo_db"
database_user: str = "postgres"
database_password: str = "postgres"
database_pool_size: int = 20
database_max_overflow: int = 10
# Redis settings
redis_host: str = "localhost"
redis_port: int = 6379
redis_db: int = 0
redis_password: str = ""
redis_max_connections: int = 50
# File upload settings
upload_dir: str = "upload"
max_upload_size: int = 10485760 # 10MB
@property
def database_url(self) -> str:
"""Construct database URL for SQLAlchemy"""
return (
f"postgresql+asyncpg://{self.database_user}:{self.database_password}"
f"@{self.database_host}:{self.database_port}/{self.database_name}"
)
@property
def redis_url(self) -> str:
"""Construct Redis URL"""
if self.redis_password:
return f"redis://:{self.redis_password}@{self.redis_host}:{self.redis_port}/{self.redis_db}"
return f"redis://{self.redis_host}:{self.redis_port}/{self.redis_db}"
class Config:
env_file = ".env"
settings = Settings()

73
app/database.py 100644
@@ -0,0 +1,73 @@
"""
Database connection and session management
"""
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, async_sessionmaker
from sqlalchemy.orm import declarative_base
from app.config import settings
import logging
logger = logging.getLogger(__name__)
# Create async engine
engine = create_async_engine(
settings.database_url,
echo=False, # Set to True for SQL query logging
pool_size=settings.database_pool_size,
max_overflow=settings.database_max_overflow,
pool_pre_ping=True, # Verify connections before using
)
# Create async session factory
AsyncSessionLocal = async_sessionmaker(
engine,
class_=AsyncSession,
expire_on_commit=False,
autocommit=False,
autoflush=False,
)
# Base class for ORM models
Base = declarative_base()
async def get_db() -> AsyncSession:
"""
Dependency function for FastAPI to get database sessions
Usage:
@app.get("/items")
async def read_items(db: AsyncSession = Depends(get_db)):
...
"""
async with AsyncSessionLocal() as session:
try:
yield session
await session.commit()
except Exception:
await session.rollback()
raise
finally:
await session.close()
async def init_db():
"""Initialize database - create all tables"""
from app.models.db import (
CelestialBody,
Position,
Resource,
StaticData,
NasaCache,
)
async with engine.begin() as conn:
# Create all tables
await conn.run_sync(Base.metadata.create_all)
logger.info("Database tables created successfully")
async def close_db():
"""Close database connections"""
await engine.dispose()
logger.info("Database connections closed")

157
app/main.py 100644
@@ -0,0 +1,157 @@
"""
Cosmo - Deep Space Explorer Backend API
FastAPI application entry point
"""
import sys
from pathlib import Path
# Add backend directory to Python path for direct execution
backend_dir = Path(__file__).resolve().parent.parent
if str(backend_dir) not in sys.path:
sys.path.insert(0, str(backend_dir))
import logging
from contextlib import asynccontextmanager
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from app.config import settings
from app.api.routes import router as celestial_router
from app.api.auth import router as auth_router
from app.api.user import router as user_router
from app.api.system import router as system_router
from app.api.danmaku import router as danmaku_router
from app.services.redis_cache import redis_cache
from app.services.cache_preheat import preheat_all_caches
from app.database import close_db
# Configure logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
logger = logging.getLogger(__name__)
@asynccontextmanager
async def lifespan(app: FastAPI):
"""Application lifespan manager - startup and shutdown events"""
# Startup
logger.info("=" * 60)
logger.info("Starting Cosmo Backend API...")
logger.info("=" * 60)
# Connect to Redis
await redis_cache.connect()
# Initialize database tables (create if not exist)
from app.database import engine, Base
from app.models.db import SystemSettings # Import to register the model
async with engine.begin() as conn:
await conn.run_sync(Base.metadata.create_all)
logger.info("✓ Database tables initialized")
# Initialize default system settings
from app.database import AsyncSessionLocal
from app.services.system_settings_service import system_settings_service
async with AsyncSessionLocal() as db:
await system_settings_service.initialize_default_settings(db)
await db.commit()
logger.info("✓ Default system settings initialized")
# Preheat caches (load from database to Redis)
await preheat_all_caches()
logger.info("✓ Application started successfully")
logger.info("=" * 60)
yield
# Shutdown
logger.info("=" * 60)
logger.info("Shutting down Cosmo Backend API...")
# Disconnect Redis
await redis_cache.disconnect()
# Close database connections
await close_db()
logger.info("✓ Application shutdown complete")
logger.info("=" * 60)
# Create FastAPI app
app = FastAPI(
title=settings.app_name,
description="Backend API for deep space probe visualization using NASA JPL Horizons data",
version="1.0.0",
lifespan=lifespan,
)
# Configure CORS
app.add_middleware(
CORSMiddleware,
allow_origins=settings.cors_origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Include routers
app.include_router(celestial_router, prefix=settings.api_prefix)
app.include_router(auth_router, prefix=settings.api_prefix)
app.include_router(user_router, prefix=settings.api_prefix)
app.include_router(system_router, prefix=settings.api_prefix)
app.include_router(danmaku_router, prefix=settings.api_prefix)
# Mount static files for uploaded resources
upload_dir = Path(__file__).parent.parent / "upload"
upload_dir.mkdir(exist_ok=True)
app.mount("/upload", StaticFiles(directory=str(upload_dir)), name="upload")
logger.info(f"Static files mounted at /upload -> {upload_dir}")
# Mount public assets directory
public_assets_dir = Path(__file__).parent.parent / "public" / "assets"
public_assets_dir.mkdir(parents=True, exist_ok=True)
app.mount("/public/assets", StaticFiles(directory=str(public_assets_dir)), name="public_assets")
logger.info(f"Public assets mounted at /public/assets -> {public_assets_dir}")
@app.get("/")
async def root():
"""Root endpoint"""
return {
"app": settings.app_name,
"version": "1.0.0",
"docs": "/docs",
"api": settings.api_prefix,
}
@app.get("/health")
async def health():
"""Health check endpoint with service status"""
from app.services.redis_cache import redis_cache
redis_stats = await redis_cache.get_stats()
return {
"status": "healthy",
"redis": redis_stats,
"database": "connected", # If we got here, database is working
}
if __name__ == "__main__":
import uvicorn
uvicorn.run(
"app.main:app",
host="0.0.0.0",
port=8000,
reload=True,
log_level="info",
)

@@ -0,0 +1,203 @@
"""
Data models for celestial bodies and positions
"""
from datetime import datetime
from typing import Literal
from pydantic import BaseModel, Field
class Position(BaseModel):
"""3D position in space (AU)"""
time: datetime = Field(..., description="Timestamp for this position")
x: float = Field(..., description="X coordinate in AU (heliocentric)")
y: float = Field(..., description="Y coordinate in AU (heliocentric)")
z: float = Field(..., description="Z coordinate in AU (heliocentric)")
class CelestialBody(BaseModel):
"""Celestial body (planet or probe)"""
id: str = Field(..., description="JPL Horizons ID")
name: str = Field(..., description="Display name")
name_zh: str | None = Field(None, description="Chinese name")
type: Literal["planet", "probe", "star", "dwarf_planet", "satellite", "comet"] = Field(..., description="Body type")
positions: list[Position] = Field(
default_factory=list, description="Position history"
)
description: str | None = Field(None, description="Description")
is_active: bool | None = Field(None, description="Active status (for probes: True=active, False=inactive)")
class CelestialDataResponse(BaseModel):
"""API response for celestial positions"""
timestamp: datetime = Field(
default_factory=datetime.utcnow, description="Data fetch timestamp"
)
bodies: list[CelestialBody] = Field(..., description="List of celestial bodies")
class BodyInfo(BaseModel):
"""Detailed information about a celestial body"""
id: str
name: str
type: Literal["planet", "probe", "star", "dwarf_planet", "satellite", "comet"]
description: str
launch_date: str | None = None
status: str | None = None
# Predefined celestial bodies
CELESTIAL_BODIES = {
# Probes
"-31": {
"name": "Voyager 1",
"name_zh": "旅行者1号",
"type": "probe",
"description": "离地球最远的人造物体,已进入星际空间",
"launch_date": "1977-09-05",
"status": "active",
},
"-32": {
"name": "Voyager 2",
"name_zh": "旅行者2号",
"type": "probe",
"description": "唯一造访过天王星和海王星的探测器",
"launch_date": "1977-08-20",
"status": "active",
},
"-98": {
"name": "New Horizons",
"name_zh": "新视野号",
"type": "probe",
"description": "飞掠冥王星,正处于柯伊伯带",
"launch_date": "2006-01-19",
"status": "active",
},
"-96": {
"name": "Parker Solar Probe",
"name_zh": "帕克太阳探测器",
"type": "probe",
"description": "正在'触摸'太阳,速度最快的人造物体",
"launch_date": "2018-08-12",
"status": "active",
},
"-61": {
"name": "Juno",
"name_zh": "朱诺号",
"type": "probe",
"description": "正在木星轨道运行",
"launch_date": "2011-08-05",
"status": "active",
},
"-82": {
"name": "Cassini",
"name_zh": "卡西尼号",
"type": "probe",
"description": "土星探测器已于2017年撞击销毁",
"launch_date": "1997-10-15",
"status": "inactive",
},
"-168": {
"name": "Perseverance",
"name_zh": "毅力号",
"type": "probe",
"description": "火星探测车",
"launch_date": "2020-07-30",
"status": "active",
},
# Planets
"10": {
"name": "Sun",
"name_zh": "太阳",
"type": "star",
"description": "太阳,太阳系的中心",
},
"199": {
"name": "Mercury",
"name_zh": "水星",
"type": "planet",
"description": "水星,距离太阳最近的行星",
},
"299": {
"name": "Venus",
"name_zh": "金星",
"type": "planet",
"description": "金星,太阳系中最热的行星",
},
"399": {
"name": "Earth",
"name_zh": "地球",
"type": "planet",
"description": "地球,我们的家园",
},
"301": {
"name": "Moon",
"name_zh": "月球",
"type": "planet",
"description": "月球,地球的天然卫星",
},
"499": {
"name": "Mars",
"name_zh": "火星",
"type": "planet",
"description": "火星,红色星球",
},
"599": {
"name": "Jupiter",
"name_zh": "木星",
"type": "planet",
"description": "木星,太阳系中最大的行星",
},
"699": {
"name": "Saturn",
"name_zh": "土星",
"type": "planet",
"description": "土星,拥有美丽的光环",
},
"799": {
"name": "Uranus",
"name_zh": "天王星",
"type": "planet",
"description": "天王星,侧躺着自转的行星",
},
"899": {
"name": "Neptune",
"name_zh": "海王星",
"type": "planet",
"description": "海王星,太阳系最外层的行星",
},
"999": {
"name": "Pluto",
"name_zh": "冥王星",
"type": "dwarf_planet",
"description": "冥王星,曾经的第九大行星,现为矮行星",
},
# Dwarf Planets
"2000001": {
"name": "Ceres",
"name_zh": "谷神星",
"type": "dwarf_planet",
"description": "谷神星,小行星带中最大的天体,也是唯一的矮行星",
},
"136199": {
"name": "Eris",
"name_zh": "阋神星",
"type": "dwarf_planet",
"description": "阋神星,曾被认为是第十大行星,导致冥王星被降级为矮行星",
},
"136108": {
"name": "Haumea",
"name_zh": "妊神星",
"type": "dwarf_planet",
"description": "妊神星,形状像橄榄球的矮行星,拥有两颗卫星和光环",
},
"136472": {
"name": "Makemake",
"name_zh": "鸟神星",
"type": "dwarf_planet",
"description": "鸟神星,柯伊伯带中第二亮的天体",
},
}

@@ -0,0 +1,30 @@
"""
Database ORM models
"""
from .celestial_body import CelestialBody
from .position import Position
from .resource import Resource
from .static_data import StaticData
from .nasa_cache import NasaCache
from .orbit import Orbit
from .user import User, user_roles
from .role import Role
from .menu import Menu, RoleMenu
from .system_settings import SystemSettings
from .task import Task
__all__ = [
"CelestialBody",
"Position",
"Resource",
"StaticData",
"NasaCache",
"Orbit",
"User",
"Role",
"Menu",
"RoleMenu",
"SystemSettings",
"user_roles",
"Task",
]

@@ -0,0 +1,45 @@
"""
CelestialBody ORM model
"""
from sqlalchemy import Column, String, Text, TIMESTAMP, Boolean, CheckConstraint, Index
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.sql import func
from sqlalchemy.orm import relationship
from app.database import Base
class CelestialBody(Base):
"""Celestial body (star, planet, probe, etc.)"""
__tablename__ = "celestial_bodies"
id = Column(String(50), primary_key=True, comment="JPL Horizons ID or custom ID")
name = Column(String(200), nullable=False, comment="English name")
name_zh = Column(String(200), nullable=True, comment="Chinese name")
type = Column(String(50), nullable=False, comment="Body type")
description = Column(Text, nullable=True, comment="Description")
is_active = Column(Boolean, nullable=True, comment="Active status for probes (True=active, False=inactive)")
extra_data = Column(JSONB, nullable=True, comment="Extended metadata (JSON)")
created_at = Column(TIMESTAMP, server_default=func.now())
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
# Relationships
positions = relationship(
"Position", back_populates="body", cascade="all, delete-orphan"
)
resources = relationship(
"Resource", back_populates="body", cascade="all, delete-orphan"
)
# Constraints
__table_args__ = (
CheckConstraint(
"type IN ('star', 'planet', 'moon', 'probe', 'comet', 'asteroid', 'dwarf_planet', 'satellite')",
name="chk_type",
),
Index("idx_celestial_bodies_type", "type"),
Index("idx_celestial_bodies_name", "name"),
)
def __repr__(self):
return f"<CelestialBody(id='{self.id}', name='{self.name}', type='{self.type}')>"

@@ -0,0 +1,64 @@
"""
Menu ORM model
"""
from sqlalchemy import Column, String, Integer, Boolean, Text, TIMESTAMP, ForeignKey, Index
from sqlalchemy.sql import func
from sqlalchemy.orm import relationship
from app.database import Base
class Menu(Base):
"""Backend menu items"""
__tablename__ = "menus"
id = Column(Integer, primary_key=True, autoincrement=True)
parent_id = Column(Integer, ForeignKey('menus.id', ondelete='CASCADE'), nullable=True, comment="Parent menu ID (NULL for root)")
name = Column(String(100), nullable=False, comment="Menu name")
title = Column(String(100), nullable=False, comment="Display title")
icon = Column(String(100), nullable=True, comment="Icon name (e.g., 'settings', 'database')")
path = Column(String(255), nullable=True, comment="Route path (e.g., '/admin/celestial-bodies')")
component = Column(String(255), nullable=True, comment="Component path (e.g., 'admin/CelestialBodies')")
sort_order = Column(Integer, default=0, nullable=False, comment="Display order (ascending)")
is_active = Column(Boolean, default=True, nullable=False, comment="Menu active status")
description = Column(Text, nullable=True, comment="Menu description")
created_at = Column(TIMESTAMP, server_default=func.now())
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
# Relationships
children = relationship("Menu", back_populates="parent", cascade="all, delete-orphan")
parent = relationship("Menu", back_populates="children", remote_side=[id])
role_menus = relationship("RoleMenu", back_populates="menu", cascade="all, delete-orphan")
# Indexes
__table_args__ = (
Index("idx_menus_parent_id", "parent_id"),
Index("idx_menus_sort_order", "sort_order"),
)
def __repr__(self):
return f"<Menu(id={self.id}, name='{self.name}', path='{self.path}')>"
class RoleMenu(Base):
"""Role-Menu relationship (which menus each role can access)"""
__tablename__ = "role_menus"
id = Column(Integer, primary_key=True, autoincrement=True)
role_id = Column(Integer, ForeignKey('roles.id', ondelete='CASCADE'), nullable=False)
menu_id = Column(Integer, ForeignKey('menus.id', ondelete='CASCADE'), nullable=False)
created_at = Column(TIMESTAMP, server_default=func.now())
# Relationships
role = relationship("Role", back_populates="menus")
menu = relationship("Menu", back_populates="role_menus")
# Constraints
__table_args__ = (
Index("idx_role_menus_role_id", "role_id"),
Index("idx_role_menus_menu_id", "menu_id"),
)
def __repr__(self):
return f"<RoleMenu(role_id={self.role_id}, menu_id={self.menu_id})>"

@@ -0,0 +1,42 @@
"""
NasaCache ORM model - NASA Horizons API cache
"""
from sqlalchemy import Column, String, TIMESTAMP, CheckConstraint, Index
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.sql import func
from app.database import Base
class NasaCache(Base):
"""NASA Horizons API response cache"""
__tablename__ = "nasa_cache"
cache_key = Column(
String(500),
primary_key=True,
comment="Cache key: {body_id}:{start}:{end}:{step}",
)
body_id = Column(String(50), nullable=True, comment="Body ID")
start_time = Column(TIMESTAMP, nullable=True, comment="Query start time")
end_time = Column(TIMESTAMP, nullable=True, comment="Query end time")
step = Column(String(10), nullable=True, comment="Time step (e.g., '1d')")
data = Column(JSONB, nullable=False, comment="Complete API response (JSON)")
expires_at = Column(
TIMESTAMP, nullable=False, comment="Cache expiration time"
)
created_at = Column(TIMESTAMP, server_default=func.now())
# Constraints and indexes
__table_args__ = (
CheckConstraint(
"end_time >= start_time",
name="chk_time_range",
),
Index("idx_nasa_cache_body_id", "body_id"),
Index("idx_nasa_cache_expires", "expires_at"),
Index("idx_nasa_cache_time_range", "body_id", "start_time", "end_time"),
)
def __repr__(self):
return f"<NasaCache(cache_key='{self.cache_key}', body_id='{self.body_id}', expires_at='{self.expires_at}')>"

@@ -0,0 +1,27 @@
"""
Database model for orbits table
"""
from datetime import datetime
from sqlalchemy import Column, Integer, String, Float, Text, DateTime, ForeignKey, Index
from sqlalchemy.dialects.postgresql import JSONB
from app.database import Base
class Orbit(Base):
"""Orbital path data for celestial bodies"""
__tablename__ = "orbits"
id = Column(Integer, primary_key=True, index=True)
body_id = Column(Text, ForeignKey("celestial_bodies.id", ondelete="CASCADE"), nullable=False, unique=True)
points = Column(JSONB, nullable=False) # Array of {x, y, z} points
num_points = Column(Integer, nullable=False)
period_days = Column(Float, nullable=True)
color = Column(String(20), nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
__table_args__ = (
Index('idx_orbits_body_id', 'body_id'),
Index('idx_orbits_updated_at', 'updated_at'),
)

@@ -0,0 +1,52 @@
"""
Position ORM model - Time series data
"""
from sqlalchemy import Column, String, TIMESTAMP, BigInteger, Float, ForeignKey, CheckConstraint, Index
from sqlalchemy.sql import func
from sqlalchemy.orm import relationship
from app.database import Base
class Position(Base):
"""Celestial body position history"""
__tablename__ = "positions"
id = Column(BigInteger, primary_key=True, autoincrement=True)
body_id = Column(
String(50),
ForeignKey("celestial_bodies.id", ondelete="CASCADE"),
nullable=False,
comment="Reference to celestial_bodies.id",
)
time = Column(TIMESTAMP, nullable=False, comment="Position timestamp (UTC)")
x = Column(Float, nullable=False, comment="X coordinate (AU)")
y = Column(Float, nullable=False, comment="Y coordinate (AU)")
z = Column(Float, nullable=False, comment="Z coordinate (AU)")
vx = Column(Float, nullable=True, comment="X velocity (optional)")
vy = Column(Float, nullable=True, comment="Y velocity (optional)")
vz = Column(Float, nullable=True, comment="Z velocity (optional)")
source = Column(
String(50),
nullable=False,
default="nasa_horizons",
comment="Data source",
)
created_at = Column(TIMESTAMP, server_default=func.now())
# Relationship
body = relationship("CelestialBody", back_populates="positions")
# Constraints and indexes
__table_args__ = (
CheckConstraint(
"source IN ('nasa_horizons', 'calculated', 'user_defined', 'imported')",
name="chk_source",
),
Index("idx_positions_body_time", "body_id", "time", postgresql_using="btree"),
Index("idx_positions_time", "time"),
Index("idx_positions_body_id", "body_id"),
)
def __repr__(self):
return f"<Position(body_id='{self.body_id}', time='{self.time}', x={self.x}, y={self.y}, z={self.z})>"

View File

@ -0,0 +1,52 @@
"""
Resource ORM model - File management
"""
from sqlalchemy import Column, String, Integer, TIMESTAMP, ForeignKey, CheckConstraint, Index
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.sql import func
from sqlalchemy.orm import relationship
from app.database import Base
class Resource(Base):
"""Resource files (textures, models, icons, etc.)"""
__tablename__ = "resources"
id = Column(Integer, primary_key=True, autoincrement=True)
body_id = Column(
String(50),
ForeignKey("celestial_bodies.id", ondelete="CASCADE"),
nullable=True,
comment="Reference to celestial_bodies.id (optional)",
)
resource_type = Column(
String(50), nullable=False, comment="Resource type"
)
file_path = Column(
String(500),
nullable=False,
comment="Relative path from upload directory",
)
file_size = Column(Integer, nullable=True, comment="File size in bytes")
mime_type = Column(String(100), nullable=True, comment="MIME type")
extra_data = Column(JSONB, nullable=True, comment="Extended metadata (JSON)")
created_at = Column(TIMESTAMP, server_default=func.now())
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
# Relationship
body = relationship("CelestialBody", back_populates="resources")
# Constraints and indexes
__table_args__ = (
CheckConstraint(
"resource_type IN ('texture', 'model', 'icon', 'thumbnail', 'data')",
name="chk_resource_type",
),
Index("idx_resources_body_id", "body_id"),
Index("idx_resources_type", "resource_type"),
Index("idx_resources_unique", "body_id", "resource_type", "file_path", unique=True),
)
def __repr__(self):
return f"<Resource(id={self.id}, body_id='{self.body_id}', type='{self.resource_type}', path='{self.file_path}')>"

View File

@ -0,0 +1,28 @@
"""
Role ORM model
"""
from sqlalchemy import Column, String, Integer, Text, TIMESTAMP, Index
from sqlalchemy.sql import func
from sqlalchemy.orm import relationship
from app.database import Base
from app.models.db.user import user_roles
class Role(Base):
"""User role (admin, user, etc.)"""
__tablename__ = "roles"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(50), unique=True, nullable=False, index=True, comment="Role name (e.g., 'admin', 'user')")
display_name = Column(String(100), nullable=False, comment="Display name")
description = Column(Text, nullable=True, comment="Role description")
created_at = Column(TIMESTAMP, server_default=func.now())
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
# Relationships
users = relationship("User", secondary=user_roles, back_populates="roles")
menus = relationship("RoleMenu", back_populates="role", cascade="all, delete-orphan")
def __repr__(self):
return f"<Role(id={self.id}, name='{self.name}')>"

View File

@ -0,0 +1,38 @@
"""
StaticData ORM model - Static astronomical data
"""
from sqlalchemy import Column, String, Integer, TIMESTAMP, CheckConstraint, Index, UniqueConstraint
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.sql import func
from app.database import Base
class StaticData(Base):
"""Static astronomical data (constellations, galaxies, stars, etc.)"""
__tablename__ = "static_data"
id = Column(Integer, primary_key=True, autoincrement=True)
category = Column(
String(50), nullable=False, comment="Data category"
)
name = Column(String(200), nullable=False, comment="Name")
name_zh = Column(String(200), nullable=True, comment="Chinese name")
data = Column(JSONB, nullable=False, comment="Complete data (JSON)")
created_at = Column(TIMESTAMP, server_default=func.now())
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
# Constraints and indexes
__table_args__ = (
CheckConstraint(
"category IN ('constellation', 'galaxy', 'star', 'nebula', 'cluster', 'asteroid_belt', 'kuiper_belt')",
name="chk_category",
),
UniqueConstraint("category", "name", name="uq_category_name"),
Index("idx_static_data_category", "category"),
Index("idx_static_data_name", "name"),
Index("idx_static_data_data", "data", postgresql_using="gin"), # JSONB GIN index
)
def __repr__(self):
return f"<StaticData(id={self.id}, category='{self.category}', name='{self.name}')>"

View File

@ -0,0 +1,26 @@
"""
System Settings Database Model
"""
from sqlalchemy import Column, Integer, String, Float, DateTime, Boolean, Text
from sqlalchemy.sql import func
from app.database import Base
class SystemSettings(Base):
"""System settings table - stores platform configuration parameters"""
__tablename__ = "system_settings"
id = Column(Integer, primary_key=True, autoincrement=True)
key = Column(String(100), unique=True, nullable=False, index=True, comment="Setting key")
value = Column(Text, nullable=False, comment="Setting value (JSON string or plain text)")
value_type = Column(String(20), nullable=False, default="string", comment="Value type: string, int, float, bool, json")
category = Column(String(50), nullable=False, default="general", comment="Setting category")
label = Column(String(200), nullable=False, comment="Display label")
description = Column(Text, comment="Setting description")
is_public = Column(Boolean, default=False, comment="Whether this setting is accessible to frontend")
created_at = Column(DateTime(timezone=True), server_default=func.now())
updated_at = Column(DateTime(timezone=True), onupdate=func.now())
def __repr__(self):
return f"<SystemSettings(key={self.key}, value={self.value})>"

View File

@ -0,0 +1,26 @@
from sqlalchemy import Column, Integer, String, Text, DateTime, JSON, ForeignKey, func
from sqlalchemy.orm import relationship
from app.database import Base
class Task(Base):
"""Background Task Model"""
__tablename__ = "tasks"
id = Column(Integer, primary_key=True, index=True)
task_type = Column(String(50), nullable=False, comment="Task type (e.g., 'nasa_download')")
status = Column(String(20), nullable=False, default='pending', index=True, comment="pending, running, completed, failed, cancelled")
description = Column(String(255), nullable=True)
params = Column(JSON, nullable=True, comment="Input parameters")
result = Column(JSON, nullable=True, comment="Output results")
progress = Column(Integer, default=0, comment="Progress 0-100")
error_message = Column(Text, nullable=True)
created_by = Column(Integer, nullable=True, comment="User ID")
created_at = Column(DateTime(timezone=True), server_default=func.now())
updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
started_at = Column(DateTime(timezone=True), nullable=True)
completed_at = Column(DateTime(timezone=True), nullable=True)
def __repr__(self):
return f"<Task(id={self.id}, type='{self.task_type}', status='{self.status}')>"

View File

@ -0,0 +1,39 @@
"""
User ORM model
"""
from sqlalchemy import Column, String, Integer, Boolean, TIMESTAMP, ForeignKey, Table
from sqlalchemy.sql import func
from sqlalchemy.orm import relationship
from app.database import Base
# Many-to-many relationship table: users <-> roles
user_roles = Table(
'user_roles',
Base.metadata,
Column('user_id', Integer, ForeignKey('users.id', ondelete='CASCADE'), primary_key=True),
Column('role_id', Integer, ForeignKey('roles.id', ondelete='CASCADE'), primary_key=True),
Column('created_at', TIMESTAMP, server_default=func.now()),
)
class User(Base):
"""User account"""
__tablename__ = "users"
id = Column(Integer, primary_key=True, autoincrement=True)
username = Column(String(50), unique=True, nullable=False, index=True, comment="Username (unique)")
password_hash = Column(String(255), nullable=False, comment="Password hash (bcrypt)")
email = Column(String(255), nullable=True, unique=True, index=True, comment="Email address")
full_name = Column(String(100), nullable=True, comment="Full name")
is_active = Column(Boolean, default=True, nullable=False, comment="Account active status")
created_at = Column(TIMESTAMP, server_default=func.now())
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
last_login_at = Column(TIMESTAMP, nullable=True, comment="Last login time")
# Relationships
roles = relationship("Role", secondary=user_roles, back_populates="users")
def __repr__(self):
return f"<User(id={self.id}, username='{self.username}')>"

View File

View File

@ -0,0 +1,42 @@
"""
JWT authentication service
"""
from datetime import datetime, timedelta
from typing import Optional
from jose import JWTError, jwt
import bcrypt
from app.config import settings
def verify_password(plain_password: str, hashed_password: str) -> bool:
"""Verify a password against a hash"""
return bcrypt.checkpw(plain_password.encode('utf-8'), hashed_password.encode('utf-8'))
def hash_password(password: str) -> str:
"""Hash a password"""
salt = bcrypt.gensalt()
hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
return hashed.decode('utf-8')
def create_access_token(data: dict, expires_delta: Optional[timedelta] = None) -> str:
"""Create a JWT access token"""
to_encode = data.copy()
if expires_delta:
expire = datetime.utcnow() + expires_delta
else:
expire = datetime.utcnow() + timedelta(minutes=settings.jwt_access_token_expire_minutes)
to_encode.update({"exp": expire})
encoded_jwt = jwt.encode(to_encode, settings.jwt_secret_key, algorithm=settings.jwt_algorithm)
return encoded_jwt
def decode_access_token(token: str) -> Optional[dict]:
"""Decode and verify a JWT access token"""
try:
payload = jwt.decode(token, settings.jwt_secret_key, algorithms=[settings.jwt_algorithm])
return payload
except JWTError:
return None
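A short sketch of how these helpers are typically combined at login time; the surrounding route is assumed and not shown here, and the `settings.jwt_*` values come from configuration.

```python
from datetime import timedelta
from app.services.auth import (
    hash_password, verify_password, create_access_token, decode_access_token
)

stored_hash = hash_password("s3cret")           # done once, at registration
assert verify_password("s3cret", stored_hash)   # done at every login attempt

# "sub" is stored as a string; the dependency layer converts it back with int()
token = create_access_token({"sub": "42"}, expires_delta=timedelta(hours=1))
payload = decode_access_token(token)            # -> {"sub": "42", "exp": ...} or None if invalid/expired
```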

View File

@ -0,0 +1,99 @@
"""
Authentication dependencies for FastAPI
"""
from typing import Optional
from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select
from sqlalchemy.orm import selectinload
from app.database import get_db
from app.models.db import User, Role
from app.services.auth import decode_access_token
# HTTP Bearer token scheme
security = HTTPBearer()
async def get_current_user(
credentials: HTTPAuthorizationCredentials = Depends(security),
db: AsyncSession = Depends(get_db)
) -> User:
"""
Get current authenticated user from JWT token
Raises:
HTTPException: If token is invalid or user not found
"""
token = credentials.credentials
# Decode token
payload = decode_access_token(token)
if payload is None:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid authentication credentials",
headers={"WWW-Authenticate": "Bearer"},
)
# Get user ID from token
user_id: Optional[int] = payload.get("sub")
if user_id is None:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid authentication credentials",
headers={"WWW-Authenticate": "Bearer"},
)
# Query user from database with roles
result = await db.execute(
select(User)
.options(selectinload(User.roles))
.where(User.id == int(user_id))
)
user = result.scalar_one_or_none()
if user is None:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="User not found",
headers={"WWW-Authenticate": "Bearer"},
)
if not user.is_active:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Inactive user"
)
return user
async def get_current_active_user(
current_user: User = Depends(get_current_user)
) -> User:
"""Get current active user"""
return current_user
async def require_admin(
current_user: User = Depends(get_current_user)
) -> User:
"""
Require user to have admin role
Raises:
HTTPException: If user is not admin
"""
# Check if user has admin role
is_admin = any(role.name == "admin" for role in current_user.roles)
if not is_admin:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Admin privileges required"
)
return current_user
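A hedged sketch of how these dependencies would typically be wired into a router. The route paths, and the module path of this dependencies file, are assumptions for illustration only.

```python
from fastapi import APIRouter, Depends
from app.models.db import User
from app.services.dependencies import get_current_user, require_admin  # module path assumed

router = APIRouter(prefix="/api/admin")

@router.get("/me")
async def read_me(current_user: User = Depends(get_current_user)):
    # Any authenticated, active user can reach this endpoint.
    return {"id": current_user.id, "username": current_user.username}

@router.delete("/celestial-bodies/{body_id}")
async def delete_body(body_id: str, admin: User = Depends(require_admin)):
    # Only users holding the 'admin' role get past require_admin.
    ...
```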

View File

@ -0,0 +1,89 @@
"""
Simple in-memory cache for celestial data
"""
from datetime import datetime, timedelta
from typing import Optional
import logging
from app.models.celestial import CelestialBody
from app.config import settings
logger = logging.getLogger(__name__)
class CacheEntry:
"""Cache entry with expiration"""
def __init__(self, data: list[CelestialBody], ttl_days: int = 3):
self.data = data
self.created_at = datetime.utcnow()
self.expires_at = self.created_at + timedelta(days=ttl_days)
def is_expired(self) -> bool:
"""Check if cache entry is expired"""
return datetime.utcnow() > self.expires_at
class CacheService:
"""Simple in-memory cache service"""
def __init__(self):
self._cache: dict[str, CacheEntry] = {}
def _make_key(
self,
start_time: datetime | None,
end_time: datetime | None,
step: str,
) -> str:
"""Generate cache key from query parameters"""
start_str = start_time.isoformat() if start_time else "now"
end_str = end_time.isoformat() if end_time else "now"
return f"{start_str}_{end_str}_{step}"
def get(
self,
start_time: datetime | None,
end_time: datetime | None,
step: str,
) -> Optional[list[CelestialBody]]:
"""
Get cached data if available and not expired
Returns:
Cached data or None if not found/expired
"""
key = self._make_key(start_time, end_time, step)
if key in self._cache:
entry = self._cache[key]
if not entry.is_expired():
logger.info(f"Cache hit for key: {key}")
return entry.data
else:
logger.info(f"Cache expired for key: {key}")
del self._cache[key]
logger.info(f"Cache miss for key: {key}")
return None
def set(
self,
data: list[CelestialBody],
start_time: datetime | None,
end_time: datetime | None,
step: str,
):
"""Store data in cache"""
key = self._make_key(start_time, end_time, step)
self._cache[key] = CacheEntry(data, ttl_days=settings.cache_ttl_days)
logger.info(f"Cached data for key: {key}")
def clear(self):
"""Clear all cache"""
self._cache.clear()
logger.info("Cache cleared")
# Singleton instance
cache_service = CacheService()
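For reference, the intended call pattern around a NASA query path might look roughly like this. The module path and the `fetch_from_horizons` helper are hypothetical; only the cache API above is real.

```python
from datetime import datetime
from app.services.cache import cache_service  # module path assumed

async def get_bodies_cached(start: datetime | None, end: datetime | None, step: str):
    cached = cache_service.get(start, end, step)
    if cached is not None:
        return cached                                      # cache hit, no NASA round trip
    bodies = await fetch_from_horizons(start, end, step)   # hypothetical fetcher
    cache_service.set(bodies, start, end, step)
    return bodies
```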

View File

@ -0,0 +1,240 @@
"""
Cache preheating service
Loads data from database to Redis on startup
"""
import logging
from datetime import datetime, timedelta
from typing import List, Dict, Any
from app.database import get_db
from app.services.redis_cache import redis_cache, make_cache_key, get_ttl_seconds
from app.services.db_service import celestial_body_service, position_service
logger = logging.getLogger(__name__)
async def preheat_current_positions():
"""
Preheat current positions from database to Redis
Loads a single position per body from the database into Redis.
Strategy: pick the stored point closest to the current hour for each body (falling back to the most recent available point).

"""
logger.info("=" * 60)
logger.info("Starting cache preheat: Current positions")
logger.info("=" * 60)
try:
async for db in get_db():
# Get all celestial bodies
all_bodies = await celestial_body_service.get_all_bodies(db)
logger.info(f"Found {len(all_bodies)} celestial bodies")
# Get current time rounded to the hour
now = datetime.utcnow()
current_hour = now.replace(minute=0, second=0, microsecond=0)
# Define time window: current hour ± 1 hour
start_window = current_hour - timedelta(hours=1)
end_window = current_hour + timedelta(hours=1)
# Collect positions for all bodies
bodies_data = []
successful_bodies = 0
for body in all_bodies:
try:
# Get position closest to current hour
recent_positions = await position_service.get_positions(
body_id=body.id,
start_time=start_window,
end_time=end_window,
session=db
)
if recent_positions and len(recent_positions) > 0:
# Use the position closest to current hour
# Find the one with time closest to current_hour
closest_pos = min(
recent_positions,
key=lambda p: abs((p.time - current_hour).total_seconds())
)
body_dict = {
"id": body.id,
"name": body.name,
"name_zh": body.name_zh,
"type": body.type,
"description": body.description,
"positions": [{
"time": closest_pos.time.isoformat(),
"x": closest_pos.x,
"y": closest_pos.y,
"z": closest_pos.z,
}]
}
bodies_data.append(body_dict)
successful_bodies += 1
logger.debug(f" ✓ Loaded position for {body.name} at {closest_pos.time}")
else:
logger.warning(f" ⚠ No position found for {body.name} near {current_hour}")
except Exception as e:
logger.warning(f" ✗ Failed to load position for {body.name}: {e}")
continue
# Write to Redis if we have data
if bodies_data:
# Cache key for current hour
time_str = current_hour.isoformat()
redis_key = make_cache_key("positions", time_str, time_str, "1h")
ttl = get_ttl_seconds("current_positions")
success = await redis_cache.set(redis_key, bodies_data, ttl)
if success:
logger.info(f"✅ Preheated current positions: {successful_bodies}/{len(all_bodies)} bodies")
logger.info(f" Time: {current_hour}")
logger.info(f" Redis key: {redis_key}")
logger.info(f" TTL: {ttl}s ({ttl // 3600}h)")
else:
logger.error("❌ Failed to write to Redis")
else:
logger.warning("⚠ No position data available to preheat")
break # Only process first database session
except Exception as e:
logger.error(f"❌ Cache preheat failed: {e}")
import traceback
traceback.print_exc()
logger.info("=" * 60)
async def preheat_historical_positions(days: int = 3):
"""
Preheat historical positions for timeline mode
Strategy: For each day, cache the position at 00:00:00 UTC (single point per day)
Args:
days: Number of days to preheat (default: 3)
"""
logger.info("=" * 60)
logger.info(f"Starting cache preheat: Historical positions ({days} days)")
logger.info("=" * 60)
try:
async for db in get_db():
# Get all celestial bodies
all_bodies = await celestial_body_service.get_all_bodies(db)
logger.info(f"Found {len(all_bodies)} celestial bodies")
# Define time window
end_date = datetime.utcnow()
start_date = end_date - timedelta(days=days)
logger.info(f"Time range: {start_date.date()} to {end_date.date()}")
# Preheat each day separately (single point at 00:00:00 per day)
cached_days = 0
for day_offset in range(days):
# Target time: midnight (00:00:00) of this day
target_day = start_date + timedelta(days=day_offset)
target_midnight = target_day.replace(hour=0, minute=0, second=0, microsecond=0)
# Search window: ±30 minutes around midnight
search_start = target_midnight - timedelta(minutes=30)
search_end = target_midnight + timedelta(minutes=30)
# Collect positions for all bodies for this specific time
bodies_data = []
successful_bodies = 0
for body in all_bodies:
try:
# Query positions near midnight of this day
positions = await position_service.get_positions(
body_id=body.id,
start_time=search_start,
end_time=search_end,
session=db
)
if positions and len(positions) > 0:
# Find the position closest to midnight
closest_pos = min(
positions,
key=lambda p: abs((p.time - target_midnight).total_seconds())
)
body_dict = {
"id": body.id,
"name": body.name,
"name_zh": body.name_zh,
"type": body.type,
"description": body.description,
"positions": [
{
"time": closest_pos.time.isoformat(),
"x": closest_pos.x,
"y": closest_pos.y,
"z": closest_pos.z,
}
]
}
bodies_data.append(body_dict)
successful_bodies += 1
except Exception as e:
logger.warning(f" ✗ Failed to load {body.name} for {target_midnight.date()}: {e}")
continue
# Write to Redis if we have complete data
if bodies_data and successful_bodies == len(all_bodies):
# Cache key for this specific midnight timestamp
time_str = target_midnight.isoformat()
redis_key = make_cache_key("positions", time_str, time_str, "1d")
ttl = get_ttl_seconds("historical_positions")
success = await redis_cache.set(redis_key, bodies_data, ttl)
if success:
cached_days += 1
logger.info(f" ✓ Cached {target_midnight.date()} 00:00 UTC: {successful_bodies} bodies")
else:
logger.warning(f" ✗ Failed to cache {target_midnight.date()}")
else:
logger.warning(f" ⚠ Incomplete data for {target_midnight.date()}: {successful_bodies}/{len(all_bodies)} bodies")
logger.info(f"✅ Preheated {cached_days}/{days} days of historical data")
break # Only process first database session
except Exception as e:
logger.error(f"❌ Historical cache preheat failed: {e}")
import traceback
traceback.print_exc()
logger.info("=" * 60)
async def preheat_all_caches():
"""
Preheat all caches on startup
Priority:
1. Current positions (most important)
2. Historical positions for timeline (3 days)
"""
logger.info("")
logger.info("🔥 Starting full cache preheat...")
logger.info("")
# 1. Preheat current positions
await preheat_current_positions()
# 2. Preheat historical positions (3 days)
await preheat_historical_positions(days=3)
logger.info("")
logger.info("🔥 Cache preheat completed!")
logger.info("")

View File

@ -0,0 +1,104 @@
import json
import time
import logging
from typing import List, Dict
from sqlalchemy.ext.asyncio import AsyncSession
from app.services.redis_cache import redis_cache
from app.services.system_settings_service import system_settings_service
logger = logging.getLogger(__name__)
class DanmakuService:
def __init__(self):
self.redis_key = "cosmo:danmaku:stream"
self.default_ttl = 86400 # 24 hours fallback
async def get_ttl(self, db: AsyncSession) -> int:
"""Fetch TTL from system settings or use default"""
try:
setting = await system_settings_service.get_setting("danmaku_ttl", db)
if setting:
val = int(setting.value)
# logger.info(f"Using configured danmaku_ttl: {val}")
return val
except Exception as e:
logger.error(f"Failed to fetch danmaku_ttl: {e}")
return self.default_ttl
async def add_danmaku(self, user_id: int, username: str, text: str, db: AsyncSession) -> Dict:
"""Add a new danmaku message"""
# Validate length (double check server side)
if len(text) > 20:
text = text[:20]
now = time.time()
ttl = await self.get_ttl(db)
expire_time = now - ttl
logger.info(f"Adding danmaku: '{text}' at {now}, ttl={ttl}, expire_threshold={expire_time}")
# Create message object
# Note: a ZSET keeps unique members, so the identical JSON sent twice would only update its score.
# Embedding the timestamp in the payload (the "ts" and "id" fields below) makes each message unique,
# so repeated texts from the same user are still kept as separate danmaku.
message = {
"uid": str(user_id),
"username": username,
"text": text,
"ts": now,
"id": f"{user_id}_{now}" # Unique ID for React keys
}
serialized = json.dumps(message)
# 1. Remove expired messages first
# ZREMRANGEBYSCORE key -inf (now - ttl)
if redis_cache.client:
try:
# Clean up old
await redis_cache.client.zremrangebyscore(self.redis_key, 0, expire_time)
# Add new
await redis_cache.client.zadd(self.redis_key, {serialized: now})
# Optional: Set key expiry to max TTL just in case (but ZADD keeps it alive)
await redis_cache.client.expire(self.redis_key, ttl)
logger.info(f"Danmaku added by {username}: {text}")
return message
except Exception as e:
logger.error(f"Redis error adding danmaku: {e}")
raise e
else:
logger.warning("Redis not connected, danmaku lost")
return message
async def get_active_danmaku(self, db: AsyncSession) -> List[Dict]:
"""Get all active danmaku messages"""
now = time.time()
ttl = await self.get_ttl(db)
min_score = now - ttl
if redis_cache.client:
try:
# Get messages from (now - ttl) to +inf
# ZRANGEBYSCORE key min max
results = await redis_cache.client.zrangebyscore(self.redis_key, min_score, "+inf")
logger.debug(f"Fetching danmaku: found {len(results)} messages (since {min_score})")
messages = []
for res in results:
try:
messages.append(json.loads(res))
except json.JSONDecodeError:
continue
return messages
except Exception as e:
logger.error(f"Redis error getting danmaku: {e}")
return []
return []
danmaku_service = DanmakuService()
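A sketch of how this service is meant to be called from an endpoint. The routes, request shape, and module path are illustrative assumptions; in practice the user identity would come from the auth dependency rather than being hard-coded.

```python
from fastapi import APIRouter, Depends
from sqlalchemy.ext.asyncio import AsyncSession
from app.database import get_db
from app.services.danmaku_service import danmaku_service  # module path assumed

router = APIRouter(prefix="/api/danmaku")

@router.get("")
async def list_danmaku(db: AsyncSession = Depends(get_db)):
    return await danmaku_service.get_active_danmaku(db)

@router.post("")
async def post_danmaku(text: str, db: AsyncSession = Depends(get_db)):
    # user_id/username would normally come from get_current_user; hard-coded here for brevity
    return await danmaku_service.add_danmaku(user_id=1, username="demo", text=text, db=db)
```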

View File

@ -0,0 +1,642 @@
"""
Database service layer for celestial data operations
"""
from typing import List, Optional, Dict, Any
from datetime import datetime
from sqlalchemy import select, and_, delete
from sqlalchemy.ext.asyncio import AsyncSession
import logging
from app.models.db import CelestialBody, Position, StaticData, NasaCache, Resource
from app.database import AsyncSessionLocal
logger = logging.getLogger(__name__)
class CelestialBodyService:
"""Service for celestial body operations"""
@staticmethod
async def get_all_bodies(
session: Optional[AsyncSession] = None,
body_type: Optional[str] = None
) -> List[CelestialBody]:
"""Get all celestial bodies, optionally filtered by type"""
async def _query(s: AsyncSession):
query = select(CelestialBody)
if body_type:
query = query.where(CelestialBody.type == body_type)
result = await s.execute(query.order_by(CelestialBody.name))
return result.scalars().all()
if session:
return await _query(session)
else:
async with AsyncSessionLocal() as s:
return await _query(s)
@staticmethod
async def get_body_by_id(
body_id: str,
session: Optional[AsyncSession] = None
) -> Optional[CelestialBody]:
"""Get a celestial body by ID"""
async def _query(s: AsyncSession):
result = await s.execute(
select(CelestialBody).where(CelestialBody.id == body_id)
)
return result.scalar_one_or_none()
if session:
return await _query(session)
else:
async with AsyncSessionLocal() as s:
return await _query(s)
@staticmethod
async def create_body(
body_data: Dict[str, Any],
session: Optional[AsyncSession] = None
) -> CelestialBody:
"""Create a new celestial body"""
async def _create(s: AsyncSession):
body = CelestialBody(**body_data)
s.add(body)
await s.commit()
await s.refresh(body)
return body
if session:
return await _create(session)
else:
async with AsyncSessionLocal() as s:
return await _create(s)
@staticmethod
async def update_body(
body_id: str,
update_data: Dict[str, Any],
session: Optional[AsyncSession] = None
) -> Optional[CelestialBody]:
"""Update a celestial body"""
async def _update(s: AsyncSession):
# Query the body
result = await s.execute(
select(CelestialBody).where(CelestialBody.id == body_id)
)
body = result.scalar_one_or_none()
if not body:
return None
# Update fields
for key, value in update_data.items():
if hasattr(body, key):
setattr(body, key, value)
await s.commit()
await s.refresh(body)
return body
if session:
return await _update(session)
else:
async with AsyncSessionLocal() as s:
return await _update(s)
@staticmethod
async def delete_body(
body_id: str,
session: Optional[AsyncSession] = None
) -> bool:
"""Delete a celestial body"""
async def _delete(s: AsyncSession):
result = await s.execute(
select(CelestialBody).where(CelestialBody.id == body_id)
)
body = result.scalar_one_or_none()
if not body:
return False
await s.delete(body)
await s.commit()
return True
if session:
return await _delete(session)
else:
async with AsyncSessionLocal() as s:
return await _delete(s)
class PositionService:
"""Service for position data operations"""
@staticmethod
async def save_positions(
body_id: str,
positions: List[Dict[str, Any]],
source: str = "nasa_horizons",
session: Optional[AsyncSession] = None
) -> int:
"""Save multiple position records for a celestial body (upsert: insert or update if exists)"""
async def _save(s: AsyncSession):
from sqlalchemy.dialects.postgresql import insert
count = 0
for pos_data in positions:
# Use PostgreSQL's INSERT ... ON CONFLICT to handle duplicates
stmt = insert(Position).values(
body_id=body_id,
time=pos_data["time"],
x=pos_data["x"],
y=pos_data["y"],
z=pos_data["z"],
vx=pos_data.get("vx"),
vy=pos_data.get("vy"),
vz=pos_data.get("vz"),
source=source
)
# On conflict (body_id, time), update the existing record
stmt = stmt.on_conflict_do_update(
index_elements=['body_id', 'time'],
set_={
'x': pos_data["x"],
'y': pos_data["y"],
'z': pos_data["z"],
'vx': pos_data.get("vx"),
'vy': pos_data.get("vy"),
'vz': pos_data.get("vz"),
'source': source
}
)
await s.execute(stmt)
count += 1
await s.commit()
return count
if session:
return await _save(session)
else:
async with AsyncSessionLocal() as s:
return await _save(s)
@staticmethod
async def get_positions(
body_id: str,
start_time: Optional[datetime] = None,
end_time: Optional[datetime] = None,
session: Optional[AsyncSession] = None
) -> List[Position]:
"""Get positions for a celestial body within a time range"""
async def _query(s: AsyncSession):
query = select(Position).where(Position.body_id == body_id)
if start_time and end_time:
query = query.where(
and_(
Position.time >= start_time,
Position.time <= end_time
)
)
elif start_time:
query = query.where(Position.time >= start_time)
elif end_time:
query = query.where(Position.time <= end_time)
query = query.order_by(Position.time)
result = await s.execute(query)
return result.scalars().all()
if session:
return await _query(session)
else:
async with AsyncSessionLocal() as s:
return await _query(s)
@staticmethod
async def get_positions_in_range(
body_id: str,
start_time: datetime,
end_time: datetime,
session: Optional[AsyncSession] = None
) -> List[Position]:
"""Alias for get_positions with required time range"""
return await PositionService.get_positions(body_id, start_time, end_time, session)
@staticmethod
async def save_position(
body_id: str,
time: datetime,
x: float,
y: float,
z: float,
source: str = "nasa_horizons",
vx: Optional[float] = None,
vy: Optional[float] = None,
vz: Optional[float] = None,
session: Optional[AsyncSession] = None
) -> Position:
"""Save a single position record"""
async def _save(s: AsyncSession):
# Check if position already exists
existing = await s.execute(
select(Position).where(
and_(
Position.body_id == body_id,
Position.time == time
)
)
)
existing_pos = existing.scalar_one_or_none()
if existing_pos:
# Update existing position
existing_pos.x = x
existing_pos.y = y
existing_pos.z = z
existing_pos.vx = vx
existing_pos.vy = vy
existing_pos.vz = vz
existing_pos.source = source
await s.commit()
await s.refresh(existing_pos)
return existing_pos
else:
# Create new position
position = Position(
body_id=body_id,
time=time,
x=x,
y=y,
z=z,
vx=vx,
vy=vy,
vz=vz,
source=source
)
s.add(position)
await s.commit()
await s.refresh(position)
return position
if session:
return await _save(session)
else:
async with AsyncSessionLocal() as s:
return await _save(s)
@staticmethod
async def delete_old_positions(
before_time: datetime,
session: Optional[AsyncSession] = None
) -> int:
"""Delete position records older than specified time"""
async def _delete(s: AsyncSession):
result = await s.execute(
delete(Position).where(Position.time < before_time)
)
await s.commit()
return result.rowcount
if session:
return await _delete(session)
else:
async with AsyncSessionLocal() as s:
return await _delete(s)
@staticmethod
async def get_available_dates(
body_id: str,
start_time: datetime,
end_time: datetime,
session: Optional[AsyncSession] = None
) -> List[datetime]:
"""Get all dates that have position data for a specific body within a time range"""
async def _query(s: AsyncSession):
from sqlalchemy import func
# Query distinct dates (truncate to date)
query = select(func.date(Position.time)).where(
and_(
Position.body_id == body_id,
Position.time >= start_time,
Position.time <= end_time
)
).distinct().order_by(func.date(Position.time))
result = await s.execute(query)
dates = [row[0] for row in result]
return dates
if session:
return await _query(session)
else:
async with AsyncSessionLocal() as s:
return await _query(s)
class NasaCacheService:
"""Service for NASA API response caching"""
@staticmethod
async def get_cached_response(
body_id: str,
start_time: Optional[datetime],
end_time: Optional[datetime],
step: str,
session: Optional[AsyncSession] = None
) -> Optional[Dict[str, Any]]:
"""Get cached NASA API response"""
async def _query(s: AsyncSession):
# Remove timezone info for comparison with database TIMESTAMP WITHOUT TIME ZONE
start_naive = start_time.replace(tzinfo=None) if start_time else None
end_naive = end_time.replace(tzinfo=None) if end_time else None
now_naive = datetime.utcnow()
result = await s.execute(
select(NasaCache).where(
and_(
NasaCache.body_id == body_id,
NasaCache.start_time == start_naive,
NasaCache.end_time == end_naive,
NasaCache.step == step,
NasaCache.expires_at > now_naive
)
)
)
cache = result.scalar_one_or_none()
return cache.data if cache else None
if session:
return await _query(session)
else:
async with AsyncSessionLocal() as s:
return await _query(s)
@staticmethod
async def save_response(
body_id: str,
start_time: Optional[datetime],
end_time: Optional[datetime],
step: str,
response_data: Dict[str, Any],
ttl_days: int = 7,
session: Optional[AsyncSession] = None
) -> NasaCache:
"""Save NASA API response to cache (upsert: insert or update if exists)"""
async def _save(s: AsyncSession):
from datetime import timedelta
from sqlalchemy.dialects.postgresql import insert
# Remove timezone info for database storage (TIMESTAMP WITHOUT TIME ZONE)
start_naive = start_time.replace(tzinfo=None) if start_time else None
end_naive = end_time.replace(tzinfo=None) if end_time else None
now_naive = datetime.utcnow()
# Generate cache key
start_str = start_time.isoformat() if start_time else "null"
end_str = end_time.isoformat() if end_time else "null"
cache_key = f"{body_id}:{start_str}:{end_str}:{step}"
# Use PostgreSQL's INSERT ... ON CONFLICT to handle duplicates atomically
stmt = insert(NasaCache).values(
cache_key=cache_key,
body_id=body_id,
start_time=start_naive,
end_time=end_naive,
step=step,
data=response_data,
expires_at=now_naive + timedelta(days=ttl_days)
)
# On conflict, update the existing record
stmt = stmt.on_conflict_do_update(
index_elements=['cache_key'],
set_={
'data': response_data,
'created_at': now_naive,
'expires_at': now_naive + timedelta(days=ttl_days)
}
).returning(NasaCache)
result = await s.execute(stmt)
cache = result.scalar_one()
await s.commit()
await s.refresh(cache)
return cache
if session:
return await _save(session)
else:
async with AsyncSessionLocal() as s:
return await _save(s)
class StaticDataService:
"""Service for static data operations"""
@staticmethod
async def get_all_items(
session: Optional[AsyncSession] = None
) -> List[StaticData]:
"""Get all static data items"""
async def _query(s: AsyncSession):
result = await s.execute(
select(StaticData).order_by(StaticData.category, StaticData.name)
)
return result.scalars().all()
if session:
return await _query(session)
else:
async with AsyncSessionLocal() as s:
return await _query(s)
@staticmethod
async def create_static(
data: Dict[str, Any],
session: Optional[AsyncSession] = None
) -> StaticData:
"""Create new static data"""
async def _create(s: AsyncSession):
item = StaticData(**data)
s.add(item)
await s.commit()
await s.refresh(item)
return item
if session:
return await _create(session)
else:
async with AsyncSessionLocal() as s:
return await _create(s)
@staticmethod
async def update_static(
item_id: int,
update_data: Dict[str, Any],
session: Optional[AsyncSession] = None
) -> Optional[StaticData]:
"""Update static data"""
async def _update(s: AsyncSession):
result = await s.execute(
select(StaticData).where(StaticData.id == item_id)
)
item = result.scalar_one_or_none()
if not item:
return None
for key, value in update_data.items():
if hasattr(item, key):
setattr(item, key, value)
await s.commit()
await s.refresh(item)
return item
if session:
return await _update(session)
else:
async with AsyncSessionLocal() as s:
return await _update(s)
@staticmethod
async def delete_static(
item_id: int,
session: Optional[AsyncSession] = None
) -> bool:
"""Delete static data"""
async def _delete(s: AsyncSession):
result = await s.execute(
select(StaticData).where(StaticData.id == item_id)
)
item = result.scalar_one_or_none()
if not item:
return False
await s.delete(item)
await s.commit()
return True
if session:
return await _delete(session)
else:
async with AsyncSessionLocal() as s:
return await _delete(s)
@staticmethod
async def get_by_category(
category: str,
session: Optional[AsyncSession] = None
) -> List[StaticData]:
"""Get all static data items for a category"""
async def _query(s: AsyncSession):
result = await s.execute(
select(StaticData)
.where(StaticData.category == category)
.order_by(StaticData.name)
)
return result.scalars().all()
if session:
return await _query(session)
else:
async with AsyncSessionLocal() as s:
return await _query(s)
@staticmethod
async def get_all_categories(
session: Optional[AsyncSession] = None
) -> List[str]:
"""Get all available categories"""
async def _query(s: AsyncSession):
result = await s.execute(
select(StaticData.category).distinct()
)
return [row[0] for row in result]
if session:
return await _query(session)
else:
async with AsyncSessionLocal() as s:
return await _query(s)
class ResourceService:
"""Service for resource file management"""
@staticmethod
async def create_resource(
resource_data: Dict[str, Any],
session: Optional[AsyncSession] = None
) -> Resource:
"""Create a new resource record"""
async def _create(s: AsyncSession):
resource = Resource(**resource_data)
s.add(resource)
await s.commit()
await s.refresh(resource)
return resource
if session:
return await _create(session)
else:
async with AsyncSessionLocal() as s:
return await _create(s)
@staticmethod
async def get_resources_by_body(
body_id: str,
resource_type: Optional[str] = None,
session: Optional[AsyncSession] = None
) -> List[Resource]:
"""Get all resources for a celestial body"""
async def _query(s: AsyncSession):
query = select(Resource).where(Resource.body_id == body_id)
if resource_type:
query = query.where(Resource.resource_type == resource_type)
result = await s.execute(query.order_by(Resource.created_at))
return result.scalars().all()
if session:
return await _query(session)
else:
async with AsyncSessionLocal() as s:
return await _query(s)
@staticmethod
async def delete_resource(
resource_id: int,
session: Optional[AsyncSession] = None
) -> bool:
"""Delete a resource record"""
async def _delete(s: AsyncSession):
result = await s.execute(
select(Resource).where(Resource.id == resource_id)
)
resource = result.scalar_one_or_none()
if resource:
await s.delete(resource)
await s.commit()
return True
return False
if session:
return await _delete(session)
else:
async with AsyncSessionLocal() as s:
return await _delete(s)
# Export service instances
celestial_body_service = CelestialBodyService()
position_service = PositionService()
nasa_cache_service = NasaCacheService()
static_data_service = StaticDataService()
resource_service = ResourceService()
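All of the services above follow the same "optional session" pattern: pass an existing `AsyncSession` to join the caller's transaction, or omit it and the service opens its own. A short sketch of both call styles ("499" is the Horizons ID for Mars, taken from the search examples above):

```python
from app.database import AsyncSessionLocal
from app.services.db_service import celestial_body_service, position_service

async def example() -> None:
    # 1) Let the service open and close its own session (simple one-off call)
    planets = await celestial_body_service.get_all_bodies(body_type="planet")

    # 2) Reuse one session across several calls (shares a connection/transaction)
    async with AsyncSessionLocal() as session:
        mars = await celestial_body_service.get_body_by_id("499", session)
        positions = await position_service.get_positions("499", session=session)
```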

View File

@ -0,0 +1,238 @@
"""
NASA JPL Horizons data query service
"""
from datetime import datetime, timedelta
from astroquery.jplhorizons import Horizons
from astropy.time import Time
import logging
import re
import httpx
from app.models.celestial import Position, CelestialBody
logger = logging.getLogger(__name__)
class HorizonsService:
"""Service for querying NASA JPL Horizons system"""
def __init__(self):
"""Initialize the service"""
self.location = "@sun" # Heliocentric coordinates
async def get_object_data_raw(self, body_id: str) -> str:
"""
Get raw object data (terminal style text) from Horizons
Args:
body_id: JPL Horizons ID
Returns:
Raw text response from NASA
"""
url = "https://ssd.jpl.nasa.gov/api/horizons.api"
# Ensure ID is quoted for COMMAND
cmd_val = f"'{body_id}'" if not body_id.startswith("'") else body_id
params = {
"format": "text",
"COMMAND": cmd_val,
"OBJ_DATA": "YES",
"MAKE_EPHEM": "NO",
"EPHEM_TYPE": "VECTORS",
"CENTER": "@sun"
}
try:
async with httpx.AsyncClient() as client:
logger.info(f"Fetching raw data for body {body_id}")
response = await client.get(url, params=params, timeout=30.0)
if response.status_code != 200:
raise Exception(f"NASA API returned status {response.status_code}")
return response.text
except Exception as e:
logger.error(f"Error fetching raw data for {body_id}: {str(e)}")
raise
def get_body_positions(
self,
body_id: str,
start_time: datetime | None = None,
end_time: datetime | None = None,
step: str = "1d",
) -> list[Position]:
"""
Get positions for a celestial body over a time range
Args:
body_id: JPL Horizons ID (e.g., '-31' for Voyager 1)
start_time: Start datetime (default: now)
end_time: End datetime (default: now)
step: Time step (e.g., '1d' for 1 day, '1h' for 1 hour)
Returns:
List of Position objects
"""
try:
# Set default times
if start_time is None:
start_time = datetime.utcnow()
if end_time is None:
end_time = start_time
# Convert to astropy Time objects for single point queries
# For ranges, use ISO format strings which Horizons prefers
# Create time range
if start_time == end_time:
# Single time point - use JD format
epochs = Time(start_time).jd
else:
# Time range - use ISO format (YYYY-MM-DD HH:MM)
# Horizons expects this format for ranges
start_str = start_time.strftime('%Y-%m-%d %H:%M')
end_str = end_time.strftime('%Y-%m-%d %H:%M')
epochs = {"start": start_str, "stop": end_str, "step": step}
logger.info(f"Querying Horizons for body {body_id} from {start_time} to {end_time}")
# Query JPL Horizons
obj = Horizons(id=body_id, location=self.location, epochs=epochs)
vectors = obj.vectors()
# Extract positions
positions = []
if isinstance(epochs, dict):
# Multiple time points
for i in range(len(vectors)):
pos = Position(
time=Time(vectors["datetime_jd"][i], format="jd").datetime,
x=float(vectors["x"][i]),
y=float(vectors["y"][i]),
z=float(vectors["z"][i]),
)
positions.append(pos)
else:
# Single time point
pos = Position(
time=start_time,
x=float(vectors["x"][0]),
y=float(vectors["y"][0]),
z=float(vectors["z"][0]),
)
positions.append(pos)
logger.info(f"Successfully retrieved {len(positions)} positions for body {body_id}")
return positions
except Exception as e:
logger.error(f"Error querying Horizons for body {body_id}: {str(e)}")
raise
def search_body_by_name(self, name: str) -> dict:
"""
Search for a celestial body by name in NASA Horizons database
Args:
name: Body name or ID to search for
Returns:
Dictionary with search results:
{
"success": bool,
"id": str (extracted or input),
"name": str (short name),
"full_name": str (complete name from NASA),
"error": str (if failed)
}
"""
try:
logger.info(f"Searching Horizons for: {name}")
# Try to query with the name
obj = Horizons(id=name, location=self.location)
vec = obj.vectors()
# Get the full target name from response
targetname = vec['targetname'][0]
logger.info(f"Found target: {targetname}")
# Extract ID and name from targetname
# Possible formats:
# 1. "136472 Makemake (2005 FY9)" - ID at start
# 2. "Voyager 1 (spacecraft) (-31)" - ID in parentheses
# 3. "Mars (499)" - ID in parentheses
# 4. "Parker Solar Probe (spacecraft)" - no ID
# 5. "Hubble Space Telescope (spacecra" - truncated
numeric_id = None
short_name = None
# Check if input is already a numeric ID
input_is_numeric = re.match(r'^-?\d+$', name.strip())
if input_is_numeric:
numeric_id = name.strip()
# Extract name from targetname
# Remove leading ID if present
name_part = re.sub(r'^\d+\s+', '', targetname)
short_name = name_part.split('(')[0].strip()
else:
# Try to extract ID from start of targetname (format: "136472 Makemake")
start_match = re.match(r'^(\d+)\s+(.+)', targetname)
if start_match:
numeric_id = start_match.group(1)
short_name = start_match.group(2).split('(')[0].strip()
else:
# Try to extract ID from parentheses (format: "Name (-31)" or "Name (499)")
id_match = re.search(r'\((-?\d+)\)', targetname)
if id_match:
numeric_id = id_match.group(1)
short_name = targetname.split('(')[0].strip()
else:
# No numeric ID found, use input name as ID
numeric_id = name
short_name = targetname.split('(')[0].strip()
return {
"success": True,
"id": numeric_id,
"name": short_name,
"full_name": targetname,
"error": None
}
except Exception as e:
error_msg = str(e)
logger.error(f"Error searching for {name}: {error_msg}")
# Check for specific error types
if 'Ambiguous target name' in error_msg:
return {
"success": False,
"id": None,
"name": None,
"full_name": None,
"error": "名称不唯一,请提供更具体的名称或 JPL Horizons ID"
}
elif 'No matches found' in error_msg or 'Unknown target' in error_msg:
return {
"success": False,
"id": None,
"name": None,
"full_name": None,
"error": "未找到匹配的天体,请检查名称或 ID"
}
else:
return {
"success": False,
"id": None,
"name": None,
"full_name": None,
"error": f"查询失败: {error_msg}"
}
# Singleton instance
horizons_service = HorizonsService()
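A small usage sketch. `get_body_positions` and `search_body_by_name` are synchronous, so from async code they are typically run in a thread pool or a background worker; "499" is Mars and "-31" is Voyager 1 in the Horizons ID scheme mentioned above.

```python
from datetime import datetime, timedelta
from app.services.horizons import horizons_service

start = datetime.utcnow()
positions = horizons_service.get_body_positions(
    body_id="499",                       # Mars
    start_time=start,
    end_time=start + timedelta(days=7),
    step="1d",
)
for p in positions:
    print(p.time, p.x, p.y, p.z)         # heliocentric coordinates in AU

match = horizons_service.search_body_by_name("Voyager 1")
# -> {"success": True, "id": "-31", "name": "Voyager 1", ...} when the lookup succeeds
```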

View File

@ -0,0 +1,120 @@
import logging
import asyncio
from datetime import datetime
from sqlalchemy.ext.asyncio import AsyncSession
from typing import List
from app.database import AsyncSessionLocal
from app.services.task_service import task_service
from app.services.db_service import celestial_body_service, position_service
from app.services.horizons import horizons_service
logger = logging.getLogger(__name__)
async def download_positions_task(task_id: int, body_ids: List[str], dates: List[str]):
"""
Background task worker for downloading NASA positions
"""
logger.info(f"Task {task_id}: Starting download for {len(body_ids)} bodies and {len(dates)} dates")
async with AsyncSessionLocal() as db:
try:
# Mark as running
await task_service.update_progress(db, task_id, 0, "running")
total_operations = len(body_ids) * len(dates)
current_op = 0
success_count = 0
failed_count = 0
results = []
for body_id in body_ids:
# Check body
body = await celestial_body_service.get_body_by_id(body_id, db)
if not body:
results.append({"body_id": body_id, "error": "Body not found"})
failed_count += len(dates)
current_op += len(dates)
continue
body_result = {
"body_id": body_id,
"body_name": body.name,
"dates": []
}
for date_str in dates:
try:
target_date = datetime.strptime(date_str, "%Y-%m-%d")
# Check existing
existing = await position_service.get_positions(
body_id=body_id,
start_time=target_date,
end_time=target_date.replace(hour=23, minute=59, second=59),
session=db
)
if existing and len(existing) > 0:
body_result["dates"].append({"date": date_str, "status": "skipped"})
success_count += 1
else:
# Download
positions = horizons_service.get_body_positions(
body_id=body_id,
start_time=target_date,
end_time=target_date,
step="1d"
)
if positions and len(positions) > 0:
pos_data = [{
"time": target_date,
"x": positions[0].x,
"y": positions[0].y,
"z": positions[0].z,
"vx": getattr(positions[0], 'vx', None),
"vy": getattr(positions[0], 'vy', None),
"vz": getattr(positions[0], 'vz', None),
}]
await position_service.save_positions(
body_id=body_id,
positions=pos_data,
source="nasa_horizons",
session=db
)
body_result["dates"].append({"date": date_str, "status": "success"})
success_count += 1
else:
body_result["dates"].append({"date": date_str, "status": "failed", "error": "No data"})
failed_count += 1
# Optional: sleep briefly between requests to avoid NASA rate limiting (currently disabled)
# await asyncio.sleep(0.1)
except Exception as e:
logger.error(f"Error processing {body_id} on {date_str}: {e}")
body_result["dates"].append({"date": date_str, "status": "error", "error": str(e)})
failed_count += 1
# Update progress
current_op += 1
progress = int((current_op / total_operations) * 100)
# Only update DB every 5% or so to reduce load, but update Redis frequently
# For now, update every item for simplicity
await task_service.update_progress(db, task_id, progress)
results.append(body_result)
# Complete
final_result = {
"total_success": success_count,
"total_failed": failed_count,
"details": results
}
await task_service.complete_task(db, task_id, final_result)
logger.info(f"Task {task_id} completed successfully")
except Exception as e:
logger.error(f"Task {task_id} failed critically: {e}")
await task_service.fail_task(db, task_id, str(e))
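How this worker gets scheduled is not shown in this file; one plausible wiring, sketched purely as an assumption, is to fire it with `asyncio.create_task` from the admin endpoint after the task record has been created.

```python
import asyncio
from app.services.nasa_download_worker import download_positions_task  # module path assumed

# Inside an async admin endpoint, after task_service has created a `task` row:
asyncio.create_task(
    download_positions_task(
        task_id=task.id,                    # id of the freshly created task record
        body_ids=["499", "599"],            # e.g. Mars and Jupiter (Horizons IDs)
        dates=["2025-01-01", "2025-01-02"],
    )
)
```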

View File

@ -0,0 +1,189 @@
"""
Service for managing orbital data
"""
from datetime import datetime, timedelta
from typing import List, Dict, Optional
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.dialects.postgresql import insert
from app.models.db.orbit import Orbit
from app.models.db.celestial_body import CelestialBody
from app.services.horizons import HorizonsService
import logging
logger = logging.getLogger(__name__)
class OrbitService:
"""Service for orbit CRUD operations and generation"""
@staticmethod
async def get_orbit(body_id: str, session: AsyncSession) -> Optional[Orbit]:
"""Get orbit data for a specific body"""
result = await session.execute(
select(Orbit).where(Orbit.body_id == body_id)
)
return result.scalar_one_or_none()
@staticmethod
async def get_all_orbits(
session: AsyncSession,
body_type: Optional[str] = None
) -> List[Orbit]:
"""Get all orbits, optionally filtered by body type"""
if body_type:
# Join with celestial_bodies to filter by type
query = (
select(Orbit)
.join(CelestialBody, Orbit.body_id == CelestialBody.id)
.where(CelestialBody.type == body_type)
)
else:
query = select(Orbit)
result = await session.execute(query)
return list(result.scalars().all())
@staticmethod
async def save_orbit(
body_id: str,
points: List[Dict[str, float]],
num_points: int,
period_days: Optional[float],
color: Optional[str],
session: AsyncSession
) -> Orbit:
"""Save or update orbit data using UPSERT"""
stmt = insert(Orbit).values(
body_id=body_id,
points=points,
num_points=num_points,
period_days=period_days,
color=color,
created_at=datetime.utcnow(),
updated_at=datetime.utcnow()
)
# On conflict, update all fields
stmt = stmt.on_conflict_do_update(
index_elements=['body_id'],
set_={
'points': points,
'num_points': num_points,
'period_days': period_days,
'color': color,
'updated_at': datetime.utcnow()
}
)
await session.execute(stmt)
await session.commit()
# Fetch and return the saved orbit
return await OrbitService.get_orbit(body_id, session)
@staticmethod
async def delete_orbit(body_id: str, session: AsyncSession) -> bool:
"""Delete orbit data for a specific body"""
orbit = await OrbitService.get_orbit(body_id, session)
if orbit:
await session.delete(orbit)
await session.commit()
return True
return False
@staticmethod
async def generate_orbit(
body_id: str,
body_name: str,
period_days: float,
color: Optional[str],
session: AsyncSession,
horizons_service: HorizonsService
) -> Orbit:
"""
Generate complete orbital data for a celestial body
Args:
body_id: JPL Horizons ID
body_name: Display name (for logging)
period_days: Orbital period in days
color: Hex color for orbit line
session: Database session
horizons_service: NASA Horizons API service
Returns:
Generated Orbit object
"""
logger.info(f"🌌 Generating orbit for {body_name} (period: {period_days:.1f} days)")
# Calculate number of sample points
# Use at least 100 points for smooth ellipse
# For very long periods, cap at 1000 to avoid excessive data
MIN_POINTS = 100
MAX_POINTS = 1000
if period_days < 3650: # < 10 years
# For planets: aim for ~1 point per day, minimum 100
num_points = max(MIN_POINTS, min(int(period_days), 365))
else: # >= 10 years
# For outer planets and dwarf planets: monthly sampling
num_points = min(int(period_days / 30), MAX_POINTS)
# Calculate step size in days
step_days = max(1, int(period_days / num_points))
logger.info(f" 📊 Sampling {num_points} points (every {step_days} days)")
# Query NASA Horizons for complete orbital period
# For very long periods (>150 years), start from a historical date
# to ensure we can get complete orbit data within NASA's range
if period_days > 150 * 365: # More than 150 years
# Start from year 1900 for historical data
start_time = datetime(1900, 1, 1)
end_time = start_time + timedelta(days=period_days)
logger.info(f" 📅 Using historical date range (1900-{end_time.year}) for long-period orbit")
else:
start_time = datetime.utcnow()
end_time = start_time + timedelta(days=period_days)
try:
# Get positions from Horizons (synchronous call)
positions = horizons_service.get_body_positions(
body_id=body_id,
start_time=start_time,
end_time=end_time,
step=f"{step_days}d"
)
if not positions or len(positions) == 0:
raise ValueError(f"No position data returned for {body_name}")
# Convert Position objects to list of dicts
points = [
{"x": pos.x, "y": pos.y, "z": pos.z}
for pos in positions
]
logger.info(f" ✅ Retrieved {len(points)} orbital points")
# Save to database
orbit = await OrbitService.save_orbit(
body_id=body_id,
points=points,
num_points=len(points),
period_days=period_days,
color=color,
session=session
)
logger.info(f" 💾 Saved orbit for {body_name}")
return orbit
except Exception as e:
logger.error(f" ❌ Failed to generate orbit for {body_name}: {e}")
raise
orbit_service = OrbitService()
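A sketch of a typical call. The module path and the color value are assumptions, and the Horizons query inside `generate_orbit` is synchronous, so long-period bodies can take a while.

```python
from app.database import AsyncSessionLocal
from app.services.horizons import horizons_service
from app.services.orbit_service import orbit_service  # module path assumed

async def generate_mars_orbit() -> None:
    async with AsyncSessionLocal() as session:
        orbit = await orbit_service.generate_orbit(
            body_id="499",
            body_name="Mars",
            period_days=687.0,          # ~1 Martian year
            color="#ff6b4a",            # illustrative hex color
            session=session,
            horizons_service=horizons_service,
        )
        print(orbit.num_points, "points cached for", orbit.body_id)
```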

View File

@ -0,0 +1,204 @@
"""
Redis cache service
Provides three-layer caching:
L1: In-memory cache (process-level, TTL: 10min)
L2: Redis cache (shared, TTL: 1h-7days)
L3: Database (persistent)
"""
import redis.asyncio as redis
from typing import Any, Optional
import json
import logging
from datetime import datetime, timedelta
from app.config import settings
logger = logging.getLogger(__name__)
class RedisCache:
"""Redis cache manager"""
def __init__(self):
self.client: Optional[redis.Redis] = None
self._connected = False
async def connect(self):
"""Connect to Redis"""
try:
self.client = redis.from_url(
settings.redis_url,
encoding="utf-8",
decode_responses=True,
max_connections=settings.redis_max_connections,
)
# Test connection
await self.client.ping()
self._connected = True
logger.info(f"✓ Connected to Redis at {settings.redis_host}:{settings.redis_port}")
except Exception as e:
logger.warning(f"⚠ Redis connection failed: {e}")
logger.warning("Falling back to in-memory cache only")
self._connected = False
async def disconnect(self):
"""Disconnect from Redis"""
if self.client:
await self.client.close()
logger.info("Redis connection closed")
async def get(self, key: str) -> Optional[Any]:
"""Get value from Redis cache"""
if not self._connected or not self.client:
return None
try:
value = await self.client.get(key)
if value:
logger.debug(f"Redis cache HIT: {key}")
return json.loads(value)
logger.debug(f"Redis cache MISS: {key}")
return None
except Exception as e:
logger.error(f"Redis get error for key '{key}': {e}")
return None
async def set(
self,
key: str,
value: Any,
ttl_seconds: Optional[int] = None,
) -> bool:
"""Set value in Redis cache with optional TTL"""
if not self._connected or not self.client:
return False
try:
serialized = json.dumps(value, default=str)
if ttl_seconds:
await self.client.setex(key, ttl_seconds, serialized)
else:
await self.client.set(key, serialized)
logger.debug(f"Redis cache SET: {key} (TTL: {ttl_seconds}s)")
return True
except Exception as e:
logger.error(f"Redis set error for key '{key}': {e}")
return False
async def delete(self, key: str) -> bool:
"""Delete key from Redis cache"""
if not self._connected or not self.client:
return False
try:
await self.client.delete(key)
logger.debug(f"Redis cache DELETE: {key}")
return True
except Exception as e:
logger.error(f"Redis delete error for key '{key}': {e}")
return False
async def exists(self, key: str) -> bool:
"""Check if key exists in Redis cache"""
if not self._connected or not self.client:
return False
try:
result = await self.client.exists(key)
return result > 0
except Exception as e:
logger.error(f"Redis exists error for key '{key}': {e}")
return False
async def clear_pattern(self, pattern: str) -> int:
"""Clear all keys matching pattern"""
if not self._connected or not self.client:
return 0
try:
keys = []
async for key in self.client.scan_iter(match=pattern):
keys.append(key)
if keys:
deleted = await self.client.delete(*keys)
logger.info(f"Cleared {deleted} keys matching pattern '{pattern}'")
return deleted
return 0
except Exception as e:
logger.error(f"Redis clear_pattern error for pattern '{pattern}': {e}")
return 0
async def get_stats(self) -> dict:
"""Get Redis statistics"""
if not self._connected or not self.client:
return {"connected": False}
try:
info = await self.client.info()
return {
"connected": True,
"used_memory_human": info.get("used_memory_human"),
"connected_clients": info.get("connected_clients"),
"total_commands_processed": info.get("total_commands_processed"),
"keyspace_hits": info.get("keyspace_hits"),
"keyspace_misses": info.get("keyspace_misses"),
}
except Exception as e:
logger.error(f"Redis get_stats error: {e}")
return {"connected": False, "error": str(e)}
# Singleton instance
redis_cache = RedisCache()
# Helper functions for common cache operations
def make_cache_key(prefix: str, *args) -> str:
"""Create standardized cache key"""
parts = [str(arg) for arg in args if arg is not None]
return f"{prefix}:{':'.join(parts)}"
def get_ttl_seconds(cache_type: str) -> int:
"""Get TTL in seconds based on cache type"""
ttl_map = {
"current_positions": 3600, # 1 hour
"historical_positions": 86400 * 7, # 7 days
"static_data": 86400 * 30, # 30 days
"nasa_api_response": 86400 * 3, # 3 days (from settings)
}
return ttl_map.get(cache_type, 3600) # Default 1 hour
async def cache_nasa_response(
body_id: str,
start_time: Optional[datetime],
end_time: Optional[datetime],
step: str,
data: Any,
) -> bool:
"""Cache NASA Horizons API response"""
# Create cache key
start_str = start_time.isoformat() if start_time else "now"
end_str = end_time.isoformat() if end_time else "now"
cache_key = make_cache_key("nasa", body_id, start_str, end_str, step)
# Cache in Redis
ttl = get_ttl_seconds("nasa_api_response")
return await redis_cache.set(cache_key, data, ttl)
async def get_cached_nasa_response(
body_id: str,
start_time: Optional[datetime],
end_time: Optional[datetime],
step: str,
) -> Optional[Any]:
"""Get cached NASA Horizons API response"""
start_str = start_time.isoformat() if start_time else "now"
end_str = end_time.isoformat() if end_time else "now"
cache_key = make_cache_key("nasa", body_id, start_str, end_str, step)
return await redis_cache.get(cache_key)
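# Usage sketch for the helpers above (illustrative; assumes an async caller such as a service method):
#
#     cached = await get_cached_nasa_response("499", start, end, "1d")
#     if cached is None:
#         data = ...  # fetch from the NASA Horizons API here
#         await cache_nasa_response("499", start, end, "1d", data)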

View File

@ -0,0 +1,210 @@
"""
System Settings Database Service
"""
from sqlalchemy import select, update, delete
from sqlalchemy.ext.asyncio import AsyncSession
from typing import Optional, List, Dict, Any
import json
import logging
from app.models.db import SystemSettings
logger = logging.getLogger(__name__)
class SystemSettingsService:
"""Service for managing system settings"""
async def get_all_settings(
self,
session: AsyncSession,
category: Optional[str] = None,
is_public: Optional[bool] = None
) -> List[SystemSettings]:
"""Get all settings, optionally filtered by category or public status"""
query = select(SystemSettings)
if category:
query = query.where(SystemSettings.category == category)
if is_public is not None:
query = query.where(SystemSettings.is_public == is_public)
result = await session.execute(query)
return result.scalars().all()
async def get_setting(
self,
key: str,
session: AsyncSession
) -> Optional[SystemSettings]:
"""Get a setting by key"""
result = await session.execute(
select(SystemSettings).where(SystemSettings.key == key)
)
return result.scalar_one_or_none()
async def get_setting_value(
self,
key: str,
session: AsyncSession,
default: Any = None
) -> Any:
"""Get setting value with type conversion"""
setting = await self.get_setting(key, session)
if not setting:
return default
# Convert value based on type
try:
if setting.value_type == "int":
return int(setting.value)
elif setting.value_type == "float":
return float(setting.value)
elif setting.value_type == "bool":
return setting.value.lower() in ("true", "1", "yes")
elif setting.value_type == "json":
return json.loads(setting.value)
else: # string
return setting.value
except Exception as e:
logger.error(f"Error converting setting {key}: {e}")
return default
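    # Example (illustrative, assuming the defaults below have been seeded):
    #   await system_settings_service.get_setting_value("orbit_points", session, default=200)
    # returns the integer 200 because that setting's value_type is "int".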
async def create_setting(
self,
data: Dict[str, Any],
session: AsyncSession
) -> SystemSettings:
"""Create a new setting"""
# Convert value to string for storage
value = data.get("value")
value_type = data.get("value_type", "string")
if value_type == "json" and not isinstance(value, str):
value = json.dumps(value)
else:
value = str(value)
new_setting = SystemSettings(
key=data["key"],
value=value,
value_type=value_type,
category=data.get("category", "general"),
label=data["label"],
description=data.get("description"),
is_public=data.get("is_public", False)
)
session.add(new_setting)
await session.flush()
await session.refresh(new_setting)
return new_setting
async def update_setting(
self,
key: str,
data: Dict[str, Any],
session: AsyncSession
) -> Optional[SystemSettings]:
"""Update a setting"""
setting = await self.get_setting(key, session)
if not setting:
return None
# Convert value to string if needed
if "value" in data:
value = data["value"]
value_type = data.get("value_type", setting.value_type)
if value_type == "json" and not isinstance(value, str):
data["value"] = json.dumps(value)
else:
data["value"] = str(value)
        # Iterate with names that do not shadow the `key` parameter (the setting key)
        for field, field_value in data.items():
            if hasattr(setting, field) and field_value is not None:
                setattr(setting, field, field_value)
await session.flush()
await session.refresh(setting)
return setting
async def delete_setting(
self,
key: str,
session: AsyncSession
) -> bool:
"""Delete a setting"""
result = await session.execute(
delete(SystemSettings).where(SystemSettings.key == key)
)
return result.rowcount > 0
async def initialize_default_settings(self, session: AsyncSession):
"""Initialize default system settings if they don't exist"""
defaults = [
{
"key": "timeline_interval_days",
"value": "30",
"value_type": "int",
"category": "visualization",
"label": "时间轴播放间隔(天)",
"description": "星图时间轴播放时每次跳转的天数间隔",
"is_public": True
},
{
"key": "current_cache_ttl_hours",
"value": "1",
"value_type": "int",
"category": "cache",
"label": "当前位置缓存时间(小时)",
"description": "当前位置数据在缓存中保存的时间",
"is_public": False
},
{
"key": "historical_cache_ttl_days",
"value": "7",
"value_type": "int",
"category": "cache",
"label": "历史位置缓存时间(天)",
"description": "历史位置数据在缓存中保存的时间",
"is_public": False
},
{
"key": "page_size",
"value": "10",
"value_type": "int",
"category": "ui",
"label": "每页显示数量",
"description": "管理页面默认每页显示的条数",
"is_public": True
},
{
"key": "nasa_api_timeout",
"value": "30",
"value_type": "int",
"category": "api",
"label": "NASA API超时时间",
"description": "查询NASA Horizons API的超时时间",
"is_public": False
},
{
"key": "orbit_points",
"value": "200",
"value_type": "int",
"category": "visualization",
"label": "轨道线点数",
"description": "生成轨道线时使用的点数,越多越平滑但性能越低",
"is_public": True
},
]
for default in defaults:
existing = await self.get_setting(default["key"], session)
if not existing:
await self.create_setting(default, session)
logger.info(f"Created default setting: {default['key']}")
# Singleton instance
system_settings_service = SystemSettingsService()

View File

@ -0,0 +1,141 @@
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select, update
from typing import Optional, Dict, Any
from datetime import datetime
import logging
import asyncio
from app.models.db import Task
from app.services.redis_cache import redis_cache
logger = logging.getLogger(__name__)
class TaskService:
def __init__(self):
self.redis_prefix = "task:progress:"
async def create_task(
self,
db: AsyncSession,
task_type: str,
description: str = None,
params: Dict[str, Any] = None,
created_by: int = None
) -> Task:
"""Create a new task record"""
task = Task(
task_type=task_type,
description=description,
params=params,
status="pending",
created_by=created_by,
progress=0
)
db.add(task)
await db.commit()
await db.refresh(task)
# Init Redis status
await self._update_redis(task.id, 0, "pending")
return task
async def update_progress(
self,
db: AsyncSession,
task_id: int,
progress: int,
status: str = "running"
):
"""Update task progress in DB and Redis"""
        # Update DB; only stamp started_at on the first "running" update so that
        # later progress updates do not overwrite it with NULL
        values: Dict[str, Any] = {"progress": progress, "status": status}
        if status == "running" and progress == 0:
            values["started_at"] = datetime.utcnow()
        stmt = (
            update(Task)
            .where(Task.id == task_id)
            .values(**values)
        )
await db.execute(stmt)
await db.commit()
# Update Redis for fast polling
await self._update_redis(task_id, progress, status)
async def complete_task(
self,
db: AsyncSession,
task_id: int,
result: Dict[str, Any] = None
):
"""Mark task as completed"""
stmt = (
update(Task)
.where(Task.id == task_id)
.values(
status="completed",
progress=100,
completed_at=datetime.utcnow(),
result=result
)
)
await db.execute(stmt)
await db.commit()
await self._update_redis(task_id, 100, "completed")
async def fail_task(
self,
db: AsyncSession,
task_id: int,
error_message: str
):
"""Mark task as failed"""
stmt = (
update(Task)
.where(Task.id == task_id)
.values(
status="failed",
completed_at=datetime.utcnow(),
error_message=error_message
)
)
await db.execute(stmt)
await db.commit()
await self._update_redis(task_id, -1, "failed", error=error_message)
async def get_task(self, db: AsyncSession, task_id: int) -> Optional[Task]:
"""Get task from DB"""
result = await db.execute(select(Task).where(Task.id == task_id))
return result.scalar_one_or_none()
async def _update_redis(
self,
task_id: int,
progress: int,
status: str,
error: str = None
):
"""Update transient state in Redis"""
key = f"{self.redis_prefix}{task_id}"
data = {
"id": task_id,
"progress": progress,
"status": status,
"updated_at": datetime.utcnow().isoformat()
}
if error:
data["error"] = error
# Set TTL for 1 hour
await redis_cache.set(key, data, ttl_seconds=3600)
async def get_task_progress_from_redis(self, task_id: int) -> Optional[Dict]:
"""Get real-time progress from Redis"""
key = f"{self.redis_prefix}{task_id}"
return await redis_cache.get(key)
task_service = TaskService()
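# Typical lifecycle (sketch; the surrounding background job is assumed, not part of this module):
#
#     task = await task_service.create_task(db, "nasa_download", "Download positions")
#     await task_service.update_progress(db, task.id, 0, status="running")
#     # ...do the work, reporting progress 1-99 along the way...
#     await task_service.complete_task(db, task.id, result={"saved": 123})
#
# Frontends can poll transient progress via task_service.get_task_progress_from_redis(task.id).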

View File

@ -0,0 +1,136 @@
"""
Token management service using Redis
"""
from typing import Optional
from datetime import timedelta
from app.services.redis_cache import redis_cache
from app.config import settings
import json
class TokenService:
"""Token management with Redis"""
def __init__(self):
self.prefix = "token:"
self.blacklist_prefix = "token:blacklist:"
self.user_tokens_prefix = "user:tokens:"
async def save_token(self, token: str, user_id: int, username: str) -> None:
"""
Save token to Redis with user info
Args:
token: JWT access token
user_id: User ID
username: Username
"""
# Save token with user info
token_data = {
"user_id": user_id,
"username": username
}
        # Store token in Redis with a TTL matching the configured JWT expiry
ttl_seconds = settings.jwt_access_token_expire_minutes * 60
await redis_cache.set(
f"{self.prefix}{token}",
json.dumps(token_data),
ttl_seconds=ttl_seconds
)
# Track user's active tokens (for multi-device support)
user_tokens_key = f"{self.user_tokens_prefix}{user_id}"
# Add token to user's token set
if redis_cache.client:
await redis_cache.client.sadd(user_tokens_key, token)
await redis_cache.client.expire(user_tokens_key, ttl_seconds)
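    # Note: token_data is json.dumps-ed above and serialized again inside redis_cache.set();
    # get_token_data() applies a matching second json.loads, so the round trip stays symmetric.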
async def get_token_data(self, token: str) -> Optional[dict]:
"""
Get token data from Redis
Args:
token: JWT access token
Returns:
Token data dict or None if not found/expired
"""
# Check if token is blacklisted
is_blacklisted = await redis_cache.exists(f"{self.blacklist_prefix}{token}")
if is_blacklisted:
return None
# Get token data
data = await redis_cache.get(f"{self.prefix}{token}")
if data:
return json.loads(data)
return None
async def revoke_token(self, token: str) -> None:
"""
Revoke a token (logout)
Args:
token: JWT access token
"""
# Get token data first to know user_id
token_data = await self.get_token_data(token)
# Add to blacklist
ttl_seconds = settings.jwt_access_token_expire_minutes * 60
        await redis_cache.set(
            f"{self.blacklist_prefix}{token}",
            "1",
            ttl_seconds=ttl_seconds
        )
# Delete from active tokens
await redis_cache.delete(f"{self.prefix}{token}")
# Remove from user's token set
if token_data and redis_cache.client:
user_id = token_data.get("user_id")
if user_id:
await redis_cache.client.srem(
f"{self.user_tokens_prefix}{user_id}",
token
)
async def revoke_all_user_tokens(self, user_id: int) -> None:
"""
Revoke all tokens for a user (logout from all devices)
Args:
user_id: User ID
"""
if not redis_cache.client:
return
# Get all user's tokens
user_tokens_key = f"{self.user_tokens_prefix}{user_id}"
tokens = await redis_cache.client.smembers(user_tokens_key)
# Revoke each token
for token in tokens:
await self.revoke_token(token.decode() if isinstance(token, bytes) else token)
# Clear user's token set
await redis_cache.delete(user_tokens_key)
async def is_token_valid(self, token: str) -> bool:
"""
Check if token is valid (not blacklisted and exists in Redis)
Args:
token: JWT access token
Returns:
True if valid, False otherwise
"""
token_data = await self.get_token_data(token)
return token_data is not None
# Global token service instance
token_service = TokenService()
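# Usage sketch (illustrative; the auth layer that issues the JWT is assumed, not defined here):
#
#     await token_service.save_token(access_token, user.id, user.username)  # on login
#     valid = await token_service.is_token_valid(access_token)              # per request
#     await token_service.revoke_token(access_token)                        # on logout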

30
requirements.txt 100644
View File

@ -0,0 +1,30 @@
fastapi==0.104.1
uvicorn[standard]==0.24.0
astroquery==0.4.7
astropy==6.0.0
pydantic==2.5.0
pydantic-settings==2.1.0
python-dotenv==1.0.0
httpx==0.25.2
# Database
sqlalchemy==2.0.23
asyncpg==0.29.0
alembic==1.13.0
greenlet==3.0.1
# Redis
redis==5.0.1
# Authentication
bcrypt==5.0.0
python-jose[cryptography]==3.5.0
passlib[bcrypt]==1.7.4
# File handling
python-multipart==0.0.6
aiofiles==23.2.1
Pillow==10.1.0
# Date handling
python-dateutil==2.8.2

View File

@ -0,0 +1,4 @@
-- Add danmaku_ttl setting (default 24 hours = 86400 seconds)
INSERT INTO system_settings (key, value, value_type, category, label, description, is_public)
SELECT 'danmaku_ttl', '86400', 'int', 'platform', '弹幕保留时间', '用户发送的弹幕在系统中保留的时间(秒)', true
WHERE NOT EXISTS (SELECT 1 FROM system_settings WHERE key = 'danmaku_ttl');

View File

@ -0,0 +1,114 @@
-- This script adds a new top-level menu "Platform Management"
-- with two sub-menus "User Management" and "Platform Parameters Management".
-- These menus will be assigned to the 'admin' role.
-- Start Transaction for atomicity
BEGIN;
-- 1. Find the ID of the 'admin' role
-- Assuming 'admin' role name exists and is unique.
DO $$
DECLARE
admin_role_id INTEGER;
platform_management_menu_id INTEGER;
user_management_menu_id INTEGER;
platform_parameters_menu_id INTEGER;
BEGIN
SELECT id INTO admin_role_id FROM roles WHERE name = 'admin';
IF admin_role_id IS NULL THEN
RAISE EXCEPTION 'Admin role not found. Please ensure the admin role exists.';
END IF;
-- 2. Insert the top-level menu: "Platform Management"
-- Check if it already exists to prevent duplicates on re-run
SELECT id INTO platform_management_menu_id FROM menus WHERE name = 'platform_management' AND parent_id IS NULL;
IF platform_management_menu_id IS NULL THEN
INSERT INTO menus (name, title, icon, path, component, sort_order, is_active, description, created_at, updated_at)
VALUES (
'platform_management',
'平台管理',
'settings', -- Using a generic settings icon for platform management
NULL, -- It's a parent menu, no direct path
NULL,
3, -- Assuming sort_order 1 & 2 are for Dashboard & Data Management
TRUE,
'管理用户和系统参数',
NOW(),
NOW()
) RETURNING id INTO platform_management_menu_id;
RAISE NOTICE 'Inserted Platform Management menu with ID: %', platform_management_menu_id;
-- Assign to admin role
INSERT INTO role_menus (role_id, menu_id, created_at)
VALUES (admin_role_id, platform_management_menu_id, NOW());
RAISE NOTICE 'Assigned Platform Management to admin role.';
ELSE
RAISE NOTICE 'Platform Management menu already exists with ID: %', platform_management_menu_id;
END IF;
-- 3. Insert sub-menu: "User Management"
-- Check if it already exists
SELECT id INTO user_management_menu_id FROM menus WHERE name = 'user_management' AND parent_id = platform_management_menu_id;
IF user_management_menu_id IS NULL THEN
INSERT INTO menus (parent_id, name, title, icon, path, component, sort_order, is_active, description, created_at, updated_at)
VALUES (
platform_management_menu_id,
'user_management',
'用户管理',
'users', -- Icon for user management
'/admin/users', -- Admin users page path
'admin/Users', -- React component path
1,
TRUE,
'管理系统用户账号',
NOW(),
NOW()
) RETURNING id INTO user_management_menu_id;
RAISE NOTICE 'Inserted User Management menu with ID: %', user_management_menu_id;
-- Assign to admin role
INSERT INTO role_menus (role_id, menu_id, created_at)
VALUES (admin_role_id, user_management_menu_id, NOW());
RAISE NOTICE 'Assigned User Management to admin role.';
ELSE
RAISE NOTICE 'User Management menu already exists with ID: %', user_management_menu_id;
END IF;
-- 4. Insert sub-menu: "Platform Parameters Management"
-- Check if it already exists
SELECT id INTO platform_parameters_menu_id FROM menus WHERE name = 'platform_parameters_management' AND parent_id = platform_management_menu_id;
IF platform_parameters_menu_id IS NULL THEN
INSERT INTO menus (parent_id, name, title, icon, path, component, sort_order, is_active, description, created_at, updated_at)
VALUES (
platform_management_menu_id,
'platform_parameters_management',
'平台参数管理',
'sliders', -- Icon for parameters/settings
'/admin/settings', -- Admin settings page path
'admin/Settings', -- React component path
2,
TRUE,
'管理系统通用配置参数',
NOW(),
NOW()
) RETURNING id INTO platform_parameters_menu_id;
RAISE NOTICE 'Inserted Platform Parameters Management menu with ID: %', platform_parameters_menu_id;
-- Assign to admin role
INSERT INTO role_menus (role_id, menu_id, created_at)
VALUES (admin_role_id, platform_parameters_menu_id, NOW());
RAISE NOTICE 'Assigned Platform Parameters Management to admin role.';
ELSE
RAISE NOTICE 'Platform Parameters Management menu already exists with ID: %', platform_parameters_menu_id;
END IF;
END $$;
-- Commit the transaction
COMMIT;

View File

@ -0,0 +1,77 @@
"""
Add Pluto to celestial bodies database
"""
import asyncio
from sqlalchemy.dialects.postgresql import insert as pg_insert
from app.database import get_db
from app.models.db.celestial_body import CelestialBody
from app.models.db.resource import Resource
async def add_pluto():
"""Add Pluto to the database"""
async for session in get_db():
try:
# Add Pluto as a celestial body
print("📍 Adding Pluto to celestial_bodies table...")
stmt = pg_insert(CelestialBody).values(
id="999",
name="Pluto",
name_zh="冥王星",
type="planet",
description="冥王星,曾经的第九大行星,现为矮行星"
)
stmt = stmt.on_conflict_do_update(
index_elements=['id'],
set_={
'name': "Pluto",
'name_zh': "冥王星",
'type': "planet",
'description': "冥王星,曾经的第九大行星,现为矮行星"
}
)
await session.execute(stmt)
await session.commit()
print("✅ Pluto added successfully!")
# Check if Pluto texture exists
import os
texture_path = "upload/texture/2k_pluto.jpg"
if os.path.exists(texture_path):
print(f"\n📸 Found Pluto texture: {texture_path}")
file_size = os.path.getsize(texture_path)
# Add texture resource
print("📦 Adding Pluto texture to resources table...")
stmt = pg_insert(Resource).values(
body_id="999",
resource_type="texture",
file_path="texture/2k_pluto.jpg",
file_size=file_size,
mime_type="image/jpeg",
extra_data=None
)
stmt = stmt.on_conflict_do_update(
index_elements=['body_id', 'resource_type', 'file_path'],
set_={
'file_size': file_size,
'mime_type': "image/jpeg",
}
)
await session.execute(stmt)
await session.commit()
print(f"✅ Pluto texture resource added ({file_size} bytes)")
else:
print(f"\n⚠️ Pluto texture not found at {texture_path}")
print(" Please add a 2k_pluto.jpg file to upload/texture/ directory")
except Exception as e:
print(f"❌ Error adding Pluto: {e}")
await session.rollback()
raise
finally:
break
if __name__ == "__main__":
asyncio.run(add_pluto())

View File

@ -0,0 +1,50 @@
-- Add System Settings menu to platform management
-- This should be executed after the system is running
-- Insert the System Settings menu under Platform Management (the parent is resolved by menu name, not a hardcoded id)
INSERT INTO menus (name, title, path, icon, parent_id, sort_order, is_active, created_at, updated_at)
VALUES (
'system_settings',
'系统参数',
'/admin/system-settings',
'settings',
(SELECT id FROM menus WHERE name = 'platform_management'),
1,
true,
NOW(),
NOW()
)
ON CONFLICT (name) DO UPDATE
SET
title = EXCLUDED.title,
path = EXCLUDED.path,
icon = EXCLUDED.icon,
parent_id = EXCLUDED.parent_id,
sort_order = EXCLUDED.sort_order,
updated_at = NOW();
-- Grant access to admin role
INSERT INTO role_menus (role_id, menu_id)
SELECT
r.id,
m.id
FROM
roles r,
menus m
WHERE
r.name = 'admin'
AND m.name = 'system_settings'
ON CONFLICT (role_id, menu_id) DO NOTHING;
-- Verify the menu was added
SELECT
m.id,
m.name,
m.title,
m.path,
m.icon,
parent.title as parent_menu,
m.sort_order
FROM menus m
LEFT JOIN menus parent ON m.parent_id = parent.id
WHERE m.name = 'system_settings';

View File

@ -0,0 +1,13 @@
-- Insert Tasks menu if it doesn't exist
INSERT INTO menus (name, title, icon, path, component, parent_id, sort_order, is_active)
SELECT 'system_tasks', 'System Tasks', 'schedule', '/admin/tasks', 'admin/Tasks', m.id, 30, true
FROM menus m
WHERE m.name = 'platform_management'
AND NOT EXISTS (SELECT 1 FROM menus WHERE name = 'system_tasks' AND parent_id = m.id);
-- Assign to admin role
INSERT INTO role_menus (role_id, menu_id, created_at)
SELECT r.id, m.id, NOW()
FROM roles r, menus m
WHERE r.name = 'admin' AND m.name = 'system_tasks'
AND NOT EXISTS (SELECT 1 FROM role_menus rm WHERE rm.role_id = r.id AND rm.menu_id = m.id);

View File

@ -0,0 +1,19 @@
-- Create tasks table for background job management
CREATE TABLE IF NOT EXISTS tasks (
id SERIAL PRIMARY KEY,
task_type VARCHAR(50) NOT NULL, -- e.g., 'nasa_download'
status VARCHAR(20) NOT NULL DEFAULT 'pending', -- pending, running, completed, failed, cancelled
description VARCHAR(255),
params JSONB, -- Store input parameters (body_ids, dates)
result JSONB, -- Store output results
progress INTEGER DEFAULT 0, -- 0 to 100
error_message TEXT,
created_by INTEGER, -- User ID who initiated
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
started_at TIMESTAMP WITH TIME ZONE,
completed_at TIMESTAMP WITH TIME ZONE
);
CREATE INDEX IF NOT EXISTS idx_tasks_status ON tasks(status);
CREATE INDEX IF NOT EXISTS idx_tasks_created_at ON tasks(created_at DESC);

View File

@ -0,0 +1,27 @@
-- Add a unique constraint to the positions table
-- so that ON CONFLICT upserts work correctly
-- 1. First remove existing duplicate rows (if any)
WITH duplicates AS (
SELECT id,
ROW_NUMBER() OVER (
PARTITION BY body_id, time
ORDER BY created_at DESC
) as rn
FROM positions
)
DELETE FROM positions
WHERE id IN (
SELECT id FROM duplicates WHERE rn > 1
);
-- 2. Add the unique constraint
ALTER TABLE positions
ADD CONSTRAINT positions_body_time_unique
UNIQUE (body_id, time);
-- 3. Verify the constraint was created
SELECT constraint_name, constraint_type
FROM information_schema.table_constraints
WHERE table_name = 'positions'
AND constraint_type = 'UNIQUE';

View File

@ -0,0 +1,214 @@
#!/usr/bin/env python3
"""
配置验证脚本 - 检查 PostgreSQL Redis 配置是否正确
Usage:
python scripts/check_config.py
"""
import asyncio
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent))
from app.config import settings
import asyncpg
import redis.asyncio as redis
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
async def check_postgresql():
"""检查 PostgreSQL 连接"""
print("\n" + "=" * 60)
print("检查 PostgreSQL 配置")
print("=" * 60)
try:
# 连接参数
print(f"主机: {settings.database_host}")
print(f"端口: {settings.database_port}")
print(f"数据库: {settings.database_name}")
print(f"用户: {settings.database_user}")
print(f"连接池大小: {settings.database_pool_size}")
# 尝试连接
conn = await asyncpg.connect(
host=settings.database_host,
port=settings.database_port,
user=settings.database_user,
password=settings.database_password,
database=settings.database_name,
)
# 查询版本
version = await conn.fetchval("SELECT version()")
print(f"\n✓ PostgreSQL 连接成功")
print(f"版本: {version.split(',')[0]}")
# 查询数据库大小
db_size = await conn.fetchval(
"SELECT pg_size_pretty(pg_database_size($1))",
settings.database_name
)
print(f"数据库大小: {db_size}")
# 查询表数量
table_count = await conn.fetchval("""
SELECT COUNT(*)
FROM information_schema.tables
WHERE table_schema = 'public'
""")
print(f"数据表数量: {table_count}")
await conn.close()
return True
except Exception as e:
print(f"\n✗ PostgreSQL 连接失败: {e}")
print("\n请检查:")
print(" 1. PostgreSQL 是否正在运行")
print(" 2. 数据库是否已创建 (运行: python scripts/create_db.py)")
print(" 3. .env 文件中的账号密码是否正确")
return False
async def check_redis():
"""检查 Redis 连接"""
print("\n" + "=" * 60)
print("检查 Redis 配置")
print("=" * 60)
try:
# 连接参数
print(f"主机: {settings.redis_host}")
print(f"端口: {settings.redis_port}")
print(f"数据库: {settings.redis_db}")
print(f"密码: {'(无)' if not settings.redis_password else '******'}")
print(f"最大连接数: {settings.redis_max_connections}")
# 尝试连接
client = redis.from_url(
settings.redis_url,
encoding="utf-8",
decode_responses=True,
)
# 测试连接
await client.ping()
print(f"\n✓ Redis 连接成功")
# 获取 Redis 信息
info = await client.info()
print(f"版本: {info.get('redis_version')}")
print(f"使用内存: {info.get('used_memory_human')}")
print(f"已连接客户端: {info.get('connected_clients')}")
print(f"运行天数: {info.get('uptime_in_days')}")
await client.close()
return True
except Exception as e:
print(f"\n⚠ Redis 连接失败: {e}")
print("\n说明:")
print(" Redis 是可选的缓存服务")
print(" 如果 Redis 不可用,应用会自动降级为内存缓存")
print(" 不影响核心功能,但会失去跨进程缓存能力")
print("\n如需启用 Redis:")
print(" - macOS: brew install redis && brew services start redis")
print(" - Ubuntu: sudo apt install redis && sudo systemctl start redis")
return False
def check_env_file():
"""检查 .env 文件"""
print("\n" + "=" * 60)
print("检查配置文件")
print("=" * 60)
env_path = Path(__file__).parent.parent / ".env"
if env_path.exists():
print(f"✓ .env 文件存在: {env_path}")
print(f"文件大小: {env_path.stat().st_size} bytes")
return True
else:
print(f"✗ .env 文件不存在")
print(f"请从 .env.example 创建: cp .env.example .env")
return False
def check_upload_dir():
"""检查上传目录"""
print("\n" + "=" * 60)
print("检查上传目录")
print("=" * 60)
upload_dir = Path(__file__).parent.parent / settings.upload_dir
if upload_dir.exists():
print(f"✓ 上传目录存在: {upload_dir}")
return True
else:
print(f"⚠ 上传目录不存在: {upload_dir}")
print(f"自动创建...")
upload_dir.mkdir(parents=True, exist_ok=True)
print(f"✓ 上传目录创建成功")
return True
async def main():
"""主函数"""
print("\n" + "=" * 60)
print(" Cosmo 配置验证工具")
print("=" * 60)
results = []
# 1. 检查配置文件
results.append(("配置文件", check_env_file()))
# 2. 检查上传目录
results.append(("上传目录", check_upload_dir()))
# 3. 检查 PostgreSQL
results.append(("PostgreSQL", await check_postgresql()))
# 4. 检查 Redis
results.append(("Redis", await check_redis()))
# 总结
print("\n" + "=" * 60)
print(" 配置检查总结")
print("=" * 60)
for name, status in results:
status_str = "" if status else ""
print(f"{status_str} {name}")
# 判断是否所有必需服务都正常
required_services = [results[0], results[1], results[2]] # 配置文件、上传目录、PostgreSQL
all_required_ok = all(status for _, status in required_services)
if all_required_ok:
print("\n" + "=" * 60)
print(" ✓ 所有必需服务配置正确!")
print("=" * 60)
print("\n可以启动服务:")
print(" python -m uvicorn app.main:app --reload")
print("\n或者:")
print(" python app/main.py")
return 0
else:
print("\n" + "=" * 60)
print(" ✗ 部分必需服务配置有问题")
print("=" * 60)
print("\n请先解决上述问题,然后重新运行此脚本")
return 1
if __name__ == "__main__":
exit_code = asyncio.run(main())
sys.exit(exit_code)

View File

@ -0,0 +1,63 @@
"""
Check probe data in database
"""
import asyncio
import sys
import os
# Add backend to path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from sqlalchemy import create_engine, text
from app.config import settings
def check_probes():
"""Check probe data directly with SQL"""
engine = create_engine(settings.database_url.replace('+asyncpg', ''))
with engine.connect() as conn:
# Check all celestial bodies
result = conn.execute(text("""
SELECT
cb.id,
cb.name,
cb.name_zh,
cb.type,
cb.is_active,
COUNT(p.id) as position_count
FROM celestial_bodies cb
LEFT JOIN positions p ON cb.id = p.body_id
GROUP BY cb.id, cb.name, cb.name_zh, cb.type, cb.is_active
ORDER BY cb.type, cb.name
"""))
print("All Celestial Bodies:")
print("=" * 100)
for row in result:
print(f"ID: {row.id:15s} | Name: {row.name:20s} | Type: {row.type:15s} | Active: {str(row.is_active):5s} | Positions: {row.position_count}")
print("\n" + "=" * 100)
print("\nProbes only:")
print("=" * 100)
result = conn.execute(text("""
SELECT
cb.id,
cb.name,
cb.name_zh,
cb.is_active,
COUNT(p.id) as position_count
FROM celestial_bodies cb
LEFT JOIN positions p ON cb.id = p.body_id
WHERE cb.type = 'probe'
GROUP BY cb.id, cb.name, cb.name_zh, cb.is_active
ORDER BY cb.name
"""))
for row in result:
print(f"ID: {row.id:15s} | Name: {row.name:20s} ({row.name_zh}) | Active: {str(row.is_active):5s} | Positions: {row.position_count}")
if __name__ == "__main__":
check_probes()

View File

@ -0,0 +1,42 @@
-- Clean up duplicate data in the database
-- 1. Remove duplicates from the positions table
-- Keep the most recent record for each (body_id, time) pair
WITH duplicates AS (
SELECT id,
ROW_NUMBER() OVER (
PARTITION BY body_id, time
ORDER BY created_at DESC
) as rn
FROM positions
)
DELETE FROM positions
WHERE id IN (
SELECT id FROM duplicates WHERE rn > 1
);
-- 2. Remove duplicates from the nasa_cache table
-- Keep the most recent record for each cache_key
WITH duplicates AS (
SELECT id,
ROW_NUMBER() OVER (
PARTITION BY cache_key
ORDER BY created_at DESC
) as rn
FROM nasa_cache
)
DELETE FROM nasa_cache
WHERE id IN (
SELECT id FROM duplicates WHERE rn > 1
);
-- 3. Verify the cleanup results
SELECT 'Positions duplicates check' as check_name,
COUNT(*) - COUNT(DISTINCT (body_id, time)) as duplicate_count
FROM positions
UNION ALL
SELECT 'NASA cache duplicates check' as check_name,
COUNT(*) - COUNT(DISTINCT cache_key) as duplicate_count
FROM nasa_cache;

View File

@ -0,0 +1,59 @@
#!/usr/bin/env python3
"""
Create PostgreSQL database for Cosmo
Usage:
python scripts/create_db.py
"""
import asyncio
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent))
from app.config import settings
import asyncpg
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
async def main():
"""Create database if it doesn't exist"""
# Connect to postgres database (default database)
try:
conn = await asyncpg.connect(
host=settings.database_host,
port=settings.database_port,
user=settings.database_user,
password=settings.database_password,
database="postgres", # Connect to default database
)
# Check if database exists
exists = await conn.fetchval(
"SELECT 1 FROM pg_database WHERE datname = $1",
settings.database_name
)
if exists:
logger.info(f"✓ Database '{settings.database_name}' already exists")
else:
# Create database
await conn.execute(f'CREATE DATABASE {settings.database_name}')
logger.info(f"✓ Database '{settings.database_name}' created successfully")
await conn.close()
except Exception as e:
logger.error(f"✗ Failed to create database: {e}")
logger.error("\nPlease ensure:")
logger.error(" 1. PostgreSQL is running")
logger.error(" 2. Database credentials in .env are correct")
logger.error(f" 3. User '{settings.database_user}' has permission to create databases")
sys.exit(1)
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,88 @@
-- ============================================================
-- Create orbits table for storing precomputed orbital paths
-- ============================================================
-- Purpose: Store complete orbital trajectories for planets and dwarf planets
-- This eliminates the need to query NASA Horizons API for orbit visualization
--
-- Usage:
-- psql -U your_user -d cosmo < create_orbits_table.sql
-- OR execute in your SQL client/tool
--
-- Version: 1.0
-- Created: 2025-11-29
-- ============================================================
-- Create orbits table
CREATE TABLE IF NOT EXISTS orbits (
id SERIAL PRIMARY KEY,
body_id TEXT NOT NULL,
points JSONB NOT NULL, -- Array of orbital points: [{"x": 1.0, "y": 0.0, "z": 0.0}, ...]
num_points INTEGER NOT NULL, -- Number of points in the orbit
period_days FLOAT, -- Orbital period in days
color VARCHAR(20), -- Orbit line color (hex format: #RRGGBB)
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT orbits_body_id_unique UNIQUE(body_id),
CONSTRAINT orbits_body_id_fkey FOREIGN KEY (body_id) REFERENCES celestial_bodies(id) ON DELETE CASCADE
);
-- Create index on body_id for fast lookups
CREATE INDEX IF NOT EXISTS idx_orbits_body_id ON orbits(body_id);
-- Create index on updated_at for tracking data freshness
CREATE INDEX IF NOT EXISTS idx_orbits_updated_at ON orbits(updated_at);
-- Add comments to table
COMMENT ON TABLE orbits IS 'Precomputed orbital paths for celestial bodies';
COMMENT ON COLUMN orbits.body_id IS 'Foreign key to celestial_bodies.id';
COMMENT ON COLUMN orbits.points IS 'Array of 3D points (x,y,z in AU) defining the orbital path';
COMMENT ON COLUMN orbits.num_points IS 'Total number of points in the orbit';
COMMENT ON COLUMN orbits.period_days IS 'Orbital period in Earth days';
COMMENT ON COLUMN orbits.color IS 'Hex color code for rendering the orbit line';
-- ============================================================
-- Sample data for testing (optional - can be removed)
-- ============================================================
-- Uncomment below to insert sample orbit for Earth
/*
INSERT INTO orbits (body_id, points, num_points, period_days, color)
VALUES (
'399', -- Earth
'[
{"x": 1.0, "y": 0.0, "z": 0.0},
{"x": 0.707, "y": 0.707, "z": 0.0},
{"x": 0.0, "y": 1.0, "z": 0.0},
{"x": -0.707, "y": 0.707, "z": 0.0},
{"x": -1.0, "y": 0.0, "z": 0.0},
{"x": -0.707, "y": -0.707, "z": 0.0},
{"x": 0.0, "y": -1.0, "z": 0.0},
{"x": 0.707, "y": -0.707, "z": 0.0}
]'::jsonb,
8,
365.25,
'#4A90E2'
)
ON CONFLICT (body_id) DO UPDATE
SET
points = EXCLUDED.points,
num_points = EXCLUDED.num_points,
period_days = EXCLUDED.period_days,
color = EXCLUDED.color,
updated_at = NOW();
*/
-- ============================================================
-- Verification queries (execute separately if needed)
-- ============================================================
-- Check if table was created successfully
-- SELECT schemaname, tablename, tableowner FROM pg_tables WHERE tablename = 'orbits';
-- Check indexes
-- SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'orbits';
-- Show table structure
-- SELECT column_name, data_type, is_nullable, column_default
-- FROM information_schema.columns
-- WHERE table_name = 'orbits'
-- ORDER BY ordinal_position;

View File

@ -0,0 +1,200 @@
#!/usr/bin/env python3
"""
Fetch celestial body positions from NASA Horizons API and cache them
This script:
1. Fetches position data for all celestial bodies
2. Caches data in Redis (L2 cache)
3. Saves data to PostgreSQL (L3 cache/persistent storage)
Usage:
python scripts/fetch_and_cache.py [--days DAYS]
Options:
--days DAYS Number of days to fetch (default: 7)
"""
import asyncio
import sys
from pathlib import Path
from datetime import datetime, timedelta
import argparse
import logging
sys.path.insert(0, str(Path(__file__).parent.parent))
from app.services.horizons import horizons_service
from app.services.db_service import (
celestial_body_service,
position_service,
nasa_cache_service
)
from app.services.redis_cache import redis_cache, cache_nasa_response
from app.config import settings
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)
async def fetch_and_cache_body(body_id: str, body_name: str, days: int = 7):
"""Fetch and cache position data for a single celestial body"""
logger.info(f"Fetching data for {body_name} ({body_id})...")
try:
# Calculate time range
now = datetime.utcnow()
start_time = now
end_time = now + timedelta(days=days)
step = "1d"
# Fetch positions from NASA API (synchronous call in async context)
loop = asyncio.get_event_loop()
positions = await loop.run_in_executor(
None,
horizons_service.get_body_positions,
body_id,
start_time,
end_time,
step
)
if not positions:
logger.warning(f"No positions returned for {body_name}")
return False
logger.info(f"Fetched {len(positions)} positions for {body_name}")
# Prepare data for caching
position_data = [
{
"time": pos.time,
"x": pos.x,
"y": pos.y,
"z": pos.z,
}
for pos in positions
]
# Cache in Redis (L2)
redis_cached = await cache_nasa_response(
body_id=body_id,
start_time=start_time,
end_time=end_time,
step=step,
data=position_data
)
if redis_cached:
logger.info(f"✓ Cached {body_name} data in Redis")
else:
logger.warning(f"⚠ Failed to cache {body_name} data in Redis")
# Save to PostgreSQL (L3 - persistent storage)
# Save raw NASA response for future cache hits
await nasa_cache_service.save_response(
body_id=body_id,
start_time=start_time,
end_time=end_time,
step=step,
response_data={"positions": position_data},
ttl_days=settings.cache_ttl_days
)
logger.info(f"✓ Cached {body_name} data in PostgreSQL (nasa_cache)")
# Save positions to positions table for querying
saved_count = await position_service.save_positions(
body_id=body_id,
positions=position_data,
source="nasa_horizons"
)
logger.info(f"✓ Saved {saved_count} positions for {body_name} in PostgreSQL")
return True
except Exception as e:
logger.error(f"✗ Failed to fetch/cache {body_name}: {e}")
import traceback
traceback.print_exc()
return False
async def main():
"""Fetch and cache data for all celestial bodies"""
parser = argparse.ArgumentParser(description='Fetch and cache celestial body positions')
parser.add_argument('--days', type=int, default=7, help='Number of days to fetch (default: 7)')
args = parser.parse_args()
logger.info("=" * 60)
logger.info("Fetch and Cache NASA Horizons Data")
logger.info("=" * 60)
logger.info(f"Time range: {args.days} days from now")
logger.info("=" * 60)
# Connect to Redis
await redis_cache.connect()
try:
# Get all celestial bodies from database
bodies = await celestial_body_service.get_all_bodies()
logger.info(f"\nFound {len(bodies)} celestial bodies in database")
# Filter for probes and planets (skip stars)
bodies_to_fetch = [
body for body in bodies
if body.type in ['probe', 'planet']
]
logger.info(f"Will fetch data for {len(bodies_to_fetch)} bodies (probes + planets)")
# Fetch and cache data for each body
success_count = 0
fail_count = 0
for i, body in enumerate(bodies_to_fetch, 1):
logger.info(f"\n[{i}/{len(bodies_to_fetch)}] Processing {body.name}...")
success = await fetch_and_cache_body(
body_id=body.id,
body_name=body.name,
days=args.days
)
if success:
success_count += 1
else:
fail_count += 1
# Small delay to avoid overwhelming NASA API
if i < len(bodies_to_fetch):
await asyncio.sleep(0.5)
# Summary
logger.info("\n" + "=" * 60)
logger.info("Summary")
logger.info("=" * 60)
logger.info(f"✓ Successfully cached: {success_count} bodies")
if fail_count > 0:
logger.warning(f"✗ Failed: {fail_count} bodies")
logger.info("=" * 60)
# Check cache status
redis_stats = await redis_cache.get_stats()
if redis_stats.get("connected"):
logger.info("\nRedis Cache Status:")
logger.info(f" Memory: {redis_stats.get('used_memory_human')}")
logger.info(f" Clients: {redis_stats.get('connected_clients')}")
logger.info(f" Hits: {redis_stats.get('keyspace_hits')}")
logger.info(f" Misses: {redis_stats.get('keyspace_misses')}")
except Exception as e:
logger.error(f"\n✗ Failed: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
finally:
# Disconnect from Redis
await redis_cache.disconnect()
if __name__ == "__main__":
asyncio.run(main())

79
scripts/init_db.py 100755
View File

@ -0,0 +1,79 @@
#!/usr/bin/env python3
"""
Database initialization script
Creates all tables in the PostgreSQL database.
Usage:
python scripts/init_db.py
"""
import asyncio
import sys
from pathlib import Path
# Add parent directory to path to import app modules
sys.path.insert(0, str(Path(__file__).parent.parent))
from app.database import init_db, close_db, engine
from app.config import settings
from sqlalchemy import text
import logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)
async def main():
"""Initialize database"""
logger.info("=" * 60)
logger.info("Cosmo Database Initialization")
logger.info("=" * 60)
logger.info(f"Database URL: {settings.database_url.split('@')[1]}") # Hide password
logger.info("=" * 60)
try:
# Test database connection
logger.info("Testing database connection...")
async with engine.begin() as conn:
await conn.execute(text("SELECT 1"))
logger.info("✓ Database connection successful")
# Create all tables
logger.info("Creating database tables...")
await init_db()
logger.info("✓ All tables created successfully")
# Display created tables
async with engine.connect() as conn:
result = await conn.execute(text("""
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name
"""))
tables = [row[0] for row in result]
logger.info(f"\nCreated {len(tables)} tables:")
for table in tables:
logger.info(f" - {table}")
logger.info("\n" + "=" * 60)
logger.info("Database initialization completed successfully!")
logger.info("=" * 60)
except Exception as e:
logger.error(f"\n✗ Database initialization failed: {e}")
logger.error("\nPlease ensure:")
logger.error(" 1. PostgreSQL is running")
logger.error(" 2. Database 'cosmo_db' exists")
logger.error(" 3. Database credentials in .env are correct")
sys.exit(1)
finally:
await close_db()
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,31 @@
"""
List celestial bodies from database
"""
import asyncio
from app.database import get_db
from app.models.db.celestial_body import CelestialBody
async def list_celestial_bodies():
"""List all celestial bodies"""
async for session in get_db():
try:
from sqlalchemy import select
stmt = select(CelestialBody).order_by(CelestialBody.type, CelestialBody.id)
result = await session.execute(stmt)
bodies = result.scalars().all()
print(f"\n📊 Found {len(bodies)} celestial bodies:\n")
print(f"{'ID':<20} {'Name':<25} {'Type':<10}")
print("=" * 60)
for body in bodies:
print(f"{body.id:<20} {body.name:<25} {body.type:<10}")
finally:
break
if __name__ == "__main__":
asyncio.run(list_celestial_bodies())

View File

@ -0,0 +1,184 @@
#!/usr/bin/env python3
"""
Data migration script
Migrates existing data from code/JSON files to PostgreSQL database:
1. CELESTIAL_BODIES dict -> celestial_bodies table
2. Frontend JSON files -> static_data table
Usage:
python scripts/migrate_data.py [--force | --skip-existing]
Options:
--force Overwrite existing data without prompting
--skip-existing Skip migration if data already exists
"""
import asyncio
import sys
from pathlib import Path
import json
import argparse
sys.path.insert(0, str(Path(__file__).parent.parent))
from app.database import AsyncSessionLocal
from app.models.celestial import CELESTIAL_BODIES
from app.models.db import CelestialBody, StaticData
from sqlalchemy import select
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
async def migrate_celestial_bodies(force: bool = False, skip_existing: bool = False):
"""Migrate CELESTIAL_BODIES dict to database"""
logger.info("=" * 60)
logger.info("Migrating celestial bodies...")
logger.info("=" * 60)
async with AsyncSessionLocal() as session:
# Check if data already exists
result = await session.execute(select(CelestialBody))
existing_count = len(result.scalars().all())
if existing_count > 0:
logger.warning(f"Found {existing_count} existing celestial bodies in database")
if skip_existing:
logger.info("Skipping celestial bodies migration (--skip-existing)")
return
if not force:
response = input("Do you want to overwrite? (yes/no): ")
if response.lower() not in ['yes', 'y']:
logger.info("Skipping celestial bodies migration")
return
else:
logger.info("Overwriting existing data (--force)")
# Delete existing data
from sqlalchemy import text
await session.execute(text("DELETE FROM celestial_bodies"))
logger.info(f"Deleted {existing_count} existing records")
# Insert new data
count = 0
for body_id, info in CELESTIAL_BODIES.items():
body = CelestialBody(
id=body_id,
name=info["name"],
name_zh=info.get("name_zh"),
type=info["type"],
description=info.get("description"),
extra_data={
"launch_date": info.get("launch_date"),
"status": info.get("status"),
} if "launch_date" in info or "status" in info else None
)
session.add(body)
count += 1
await session.commit()
logger.info(f"✓ Migrated {count} celestial bodies")
async def migrate_static_data(force: bool = False, skip_existing: bool = False):
"""Migrate frontend JSON files to database"""
logger.info("=" * 60)
logger.info("Migrating static data from JSON files...")
logger.info("=" * 60)
# Define JSON files to migrate
frontend_data_dir = Path(__file__).parent.parent.parent / "frontend" / "public" / "data"
json_files = {
"nearby-stars.json": "star",
"constellations.json": "constellation",
"galaxies.json": "galaxy",
}
async with AsyncSessionLocal() as session:
for filename, category in json_files.items():
file_path = frontend_data_dir / filename
if not file_path.exists():
logger.warning(f"File not found: {file_path}")
continue
# Load JSON data
with open(file_path, 'r', encoding='utf-8') as f:
data_list = json.load(f)
# Check if category data already exists
result = await session.execute(
select(StaticData).where(StaticData.category == category)
)
existing = result.scalars().all()
if existing:
logger.warning(f"Found {len(existing)} existing {category} records")
if skip_existing:
logger.info(f"Skipping {category} migration (--skip-existing)")
continue
if not force:
response = input(f"Overwrite {category} data? (yes/no): ")
if response.lower() not in ['yes', 'y']:
logger.info(f"Skipping {category} migration")
continue
else:
logger.info(f"Overwriting {category} data (--force)")
# Delete existing
for record in existing:
await session.delete(record)
# Insert new data
count = 0
for item in data_list:
static_item = StaticData(
category=category,
name=item.get("name", "Unknown"),
name_zh=item.get("name_zh"),
data=item
)
session.add(static_item)
count += 1
await session.commit()
logger.info(f"✓ Migrated {count} {category} records")
async def main():
"""Run all migrations"""
# Parse command line arguments
parser = argparse.ArgumentParser(description='Migrate data to PostgreSQL database')
group = parser.add_mutually_exclusive_group()
group.add_argument('--force', action='store_true', help='Overwrite existing data without prompting')
group.add_argument('--skip-existing', action='store_true', help='Skip migration if data already exists')
args = parser.parse_args()
logger.info("\n" + "=" * 60)
logger.info("Cosmo Data Migration")
logger.info("=" * 60 + "\n")
try:
# Migrate celestial bodies
await migrate_celestial_bodies(force=args.force, skip_existing=args.skip_existing)
# Migrate static data
await migrate_static_data(force=args.force, skip_existing=args.skip_existing)
logger.info("\n" + "=" * 60)
logger.info("✓ Migration completed successfully!")
logger.info("=" * 60)
except Exception as e:
logger.error(f"\n✗ Migration failed: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,143 @@
"""
Populate resources table with texture and model files
"""
import asyncio
import os
from pathlib import Path
from sqlalchemy.dialects.postgresql import insert as pg_insert
from app.database import get_db
from app.models.db.resource import Resource
# Mapping of texture files to celestial body IDs (use numeric Horizons IDs)
TEXTURE_MAPPING = {
"2k_sun.jpg": {"body_id": "10", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_mercury.jpg": {"body_id": "199", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_venus_surface.jpg": {"body_id": "299", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_venus_atmosphere.jpg": {"body_id": "299", "resource_type": "texture", "mime_type": "image/jpeg", "extra_data": {"layer": "atmosphere"}},
"2k_earth_daymap.jpg": {"body_id": "399", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_earth_nightmap.jpg": {"body_id": "399", "resource_type": "texture", "mime_type": "image/jpeg", "extra_data": {"layer": "night"}},
"2k_moon.jpg": {"body_id": "301", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_mars.jpg": {"body_id": "499", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_jupiter.jpg": {"body_id": "599", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_saturn.jpg": {"body_id": "699", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_saturn_ring_alpha.png": {"body_id": "699", "resource_type": "texture", "mime_type": "image/png", "extra_data": {"layer": "ring"}},
"2k_uranus.jpg": {"body_id": "799", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_neptune.jpg": {"body_id": "899", "resource_type": "texture", "mime_type": "image/jpeg"},
"2k_stars_milky_way.jpg": {"body_id": None, "resource_type": "texture", "mime_type": "image/jpeg", "extra_data": {"usage": "skybox"}},
}
# Mapping of model files to celestial body IDs (use numeric probe IDs)
MODEL_MAPPING = {
"voyager_1.glb": {"body_id": "-31", "resource_type": "model", "mime_type": "model/gltf-binary"},
"voyager_2.glb": {"body_id": "-32", "resource_type": "model", "mime_type": "model/gltf-binary"},
"juno.glb": {"body_id": "-61", "resource_type": "model", "mime_type": "model/gltf-binary"},
"parker_solar_probe.glb": {"body_id": "-96", "resource_type": "model", "mime_type": "model/gltf-binary"},
"cassini.glb": {"body_id": "-82", "resource_type": "model", "mime_type": "model/gltf-binary"},
}
async def populate_resources():
"""Populate resources table with texture and model files"""
# Get upload directory path
upload_dir = Path(__file__).parent.parent / "upload"
texture_dir = upload_dir / "texture"
model_dir = upload_dir / "model"
print(f"📂 Scanning upload directory: {upload_dir}")
print(f"📂 Texture directory: {texture_dir}")
print(f"📂 Model directory: {model_dir}")
async for session in get_db():
try:
# Process textures
print("\n🖼️ Processing textures...")
texture_count = 0
for filename, mapping in TEXTURE_MAPPING.items():
file_path = texture_dir / filename
if not file_path.exists():
print(f"⚠️ Warning: Texture file not found: {filename}")
continue
file_size = file_path.stat().st_size
# Prepare resource data
resource_data = {
"body_id": mapping["body_id"],
"resource_type": mapping["resource_type"],
"file_path": f"texture/{filename}",
"file_size": file_size,
"mime_type": mapping["mime_type"],
"extra_data": mapping.get("extra_data"),
}
# Use upsert to avoid duplicates
stmt = pg_insert(Resource).values(**resource_data)
stmt = stmt.on_conflict_do_update(
index_elements=['body_id', 'resource_type', 'file_path'],
set_={
'file_size': file_size,
'mime_type': mapping["mime_type"],
'extra_data': mapping.get("extra_data"),
}
)
await session.execute(stmt)
texture_count += 1
print(f"{filename} -> {mapping['body_id'] or 'global'} ({file_size} bytes)")
# Process models
print("\n🚀 Processing models...")
model_count = 0
for filename, mapping in MODEL_MAPPING.items():
file_path = model_dir / filename
if not file_path.exists():
print(f"⚠️ Warning: Model file not found: {filename}")
continue
file_size = file_path.stat().st_size
# Prepare resource data
resource_data = {
"body_id": mapping["body_id"],
"resource_type": mapping["resource_type"],
"file_path": f"model/{filename}",
"file_size": file_size,
"mime_type": mapping["mime_type"],
"extra_data": mapping.get("extra_data"),
}
# Use upsert to avoid duplicates
stmt = pg_insert(Resource).values(**resource_data)
stmt = stmt.on_conflict_do_update(
index_elements=['body_id', 'resource_type', 'file_path'],
set_={
'file_size': file_size,
'mime_type': mapping["mime_type"],
'extra_data': mapping.get("extra_data"),
}
)
await session.execute(stmt)
model_count += 1
print(f"{filename} -> {mapping['body_id']} ({file_size} bytes)")
# Commit all changes
await session.commit()
print(f"\n✨ Successfully populated resources table:")
print(f" 📊 Textures: {texture_count}")
print(f" 📊 Models: {model_count}")
print(f" 📊 Total: {texture_count + model_count}")
except Exception as e:
print(f"❌ Error populating resources: {e}")
await session.rollback()
raise
finally:
break
if __name__ == "__main__":
asyncio.run(populate_resources())

View File

@ -0,0 +1,223 @@
#!/usr/bin/env python3
"""
Historical Data Prefetch Script
This script prefetches historical position data for all celestial bodies
and stores them in the database for fast retrieval.
Usage:
# Prefetch last 12 months
python scripts/prefetch_historical_data.py --months 12
# Prefetch specific year-month
python scripts/prefetch_historical_data.py --year 2024 --month 1
# Prefetch a range
python scripts/prefetch_historical_data.py --start-year 2023 --start-month 1 --end-year 2023 --end-month 12
"""
import sys
import os
import asyncio
import argparse
from datetime import datetime, timedelta
from dateutil.relativedelta import relativedelta
# Add backend to path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from app.database import get_db
from app.services.horizons import horizons_service
from app.services.db_service import position_service, celestial_body_service
async def prefetch_month(year: int, month: int, session):
"""
Prefetch data for a specific month
Args:
year: Year (e.g., 2023)
month: Month (1-12)
session: Database session
"""
# Calculate start and end of month
start_date = datetime(year, month, 1, 0, 0, 0)
if month == 12:
end_date = datetime(year + 1, 1, 1, 0, 0, 0)
else:
end_date = datetime(year, month + 1, 1, 0, 0, 0)
print(f"\n{'='*60}")
print(f"📅 Prefetching data for {year}-{month:02d}")
print(f" Period: {start_date.date()} to {end_date.date()}")
print(f"{'='*60}")
# Get all celestial bodies from database
all_bodies = await celestial_body_service.get_all_bodies(session)
total_bodies = len(all_bodies)
success_count = 0
skip_count = 0
error_count = 0
for idx, body in enumerate(all_bodies, 1):
body_id = body.id
body_name = body.name
try:
# Check if we already have data for this month
existing_positions = await position_service.get_positions_in_range(
body_id, start_date, end_date, session
)
if existing_positions and len(existing_positions) > 0:
print(f" [{idx}/{total_bodies}] ⏭️ {body_name:20s} - Already exists ({len(existing_positions)} positions)")
skip_count += 1
continue
print(f" [{idx}/{total_bodies}] 🔄 {body_name:20s} - Fetching...", end='', flush=True)
# Query NASA Horizons API for this month
# Sample every 7 days to reduce data volume
step = "7d"
if body_id == "10":
# Sun is always at origin
positions = [
{"time": start_date, "x": 0.0, "y": 0.0, "z": 0.0},
{"time": end_date, "x": 0.0, "y": 0.0, "z": 0.0},
]
elif body_id == "-82":
# Cassini mission ended 2017-09-15
if year < 2017 or (year == 2017 and month <= 9):
cassini_date = datetime(2017, 9, 15, 11, 58, 0)
positions_data = horizons_service.get_body_positions(
body_id, cassini_date, cassini_date, step
)
positions = [
{"time": p.time, "x": p.x, "y": p.y, "z": p.z}
for p in positions_data
]
else:
print(f" ⏭️ Mission ended", flush=True)
skip_count += 1
continue
else:
# Query other bodies
positions_data = horizons_service.get_body_positions(
body_id, start_date, end_date, step
)
positions = [
{"time": p.time, "x": p.x, "y": p.y, "z": p.z}
for p in positions_data
]
# Store in database
for pos_data in positions:
await position_service.save_position(
body_id=body_id,
time=pos_data["time"],
x=pos_data["x"],
y=pos_data["y"],
z=pos_data["z"],
source="nasa_horizons",
session=session,
)
print(f" ✅ Saved {len(positions)} positions", flush=True)
success_count += 1
# Small delay to avoid overwhelming NASA API
await asyncio.sleep(0.5)
except Exception as e:
print(f" ❌ Error: {str(e)}", flush=True)
error_count += 1
continue
print(f"\n{'='*60}")
print(f"📊 Summary for {year}-{month:02d}:")
print(f" ✅ Success: {success_count}")
print(f" ⏭️ Skipped: {skip_count}")
print(f" ❌ Errors: {error_count}")
print(f"{'='*60}\n")
return success_count, skip_count, error_count
async def main():
parser = argparse.ArgumentParser(description="Prefetch historical celestial data")
parser.add_argument("--months", type=int, help="Number of months to prefetch from now (default: 12)")
parser.add_argument("--year", type=int, help="Specific year to prefetch")
parser.add_argument("--month", type=int, help="Specific month to prefetch (1-12)")
parser.add_argument("--start-year", type=int, help="Start year for range")
parser.add_argument("--start-month", type=int, help="Start month for range (1-12)")
parser.add_argument("--end-year", type=int, help="End year for range")
parser.add_argument("--end-month", type=int, help="End month for range (1-12)")
args = parser.parse_args()
# Determine date range
months_to_fetch = []
if args.year and args.month:
# Single month
months_to_fetch.append((args.year, args.month))
elif args.start_year and args.start_month and args.end_year and args.end_month:
# Date range
current = datetime(args.start_year, args.start_month, 1)
end = datetime(args.end_year, args.end_month, 1)
while current <= end:
months_to_fetch.append((current.year, current.month))
current += relativedelta(months=1)
else:
# Default: last N months
months = args.months or 12
current = datetime.now()
for i in range(months):
past_date = current - relativedelta(months=i)
months_to_fetch.append((past_date.year, past_date.month))
months_to_fetch.reverse() # Start from oldest
if not months_to_fetch:
print("❌ No months to fetch. Please specify a valid date range.")
return
print(f"\n🚀 Historical Data Prefetch Script")
print(f"{'='*60}")
print(f"📅 Total months to fetch: {len(months_to_fetch)}")
print(f" From: {months_to_fetch[0][0]}-{months_to_fetch[0][1]:02d}")
print(f" To: {months_to_fetch[-1][0]}-{months_to_fetch[-1][1]:02d}")
print(f"{'='*60}\n")
total_success = 0
total_skip = 0
total_error = 0
async for session in get_db():
start_time = datetime.now()
for year, month in months_to_fetch:
success, skip, error = await prefetch_month(year, month, session)
total_success += success
total_skip += skip
total_error += error
end_time = datetime.now()
duration = end_time - start_time
print(f"\n{'='*60}")
print(f"🎉 Prefetch Complete!")
print(f"{'='*60}")
print(f"📊 Overall Summary:")
print(f" Total months processed: {len(months_to_fetch)}")
print(f" ✅ Total success: {total_success}")
print(f" ⏭️ Total skipped: {total_skip}")
print(f" ❌ Total errors: {total_error}")
print(f" ⏱️ Duration: {duration}")
print(f"{'='*60}\n")
break
if __name__ == "__main__":
asyncio.run(main())
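For reference, the default branch of the argument handling above (no explicit year/month or range) reduces to the following standalone sketch. It assumes only that `python-dateutil` is installed, which the script already relies on for `relativedelta`:

```python
# Minimal sketch of the default date-range logic: walk back N months with
# relativedelta, then reverse so fetching starts from the oldest month.
from datetime import datetime
from dateutil.relativedelta import relativedelta

def last_n_months(n: int = 12) -> list[tuple[int, int]]:
    now = datetime.now()
    months = [((now - relativedelta(months=i)).year,
               (now - relativedelta(months=i)).month) for i in range(n)]
    months.reverse()  # oldest first
    return months

if __name__ == "__main__":
    for year, month in last_n_months(3):
        print(f"{year}-{month:02d}")
```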

View File

@ -0,0 +1,27 @@
"""
Recreate resources table with unique constraint
"""
import asyncio
from app.database import engine
from app.models.db.resource import Resource
from sqlalchemy import text
async def recreate_resources_table():
"""Drop and recreate resources table"""
async with engine.begin() as conn:
# Drop the table
print("🗑️ Dropping resources table...")
await conn.execute(text("DROP TABLE IF EXISTS resources CASCADE"))
print("✓ Table dropped")
# Recreate the table
print("📦 Creating resources table with new schema...")
await conn.run_sync(Resource.metadata.create_all)
print("✓ Table created")
print("\n✨ Resources table recreated successfully!")
if __name__ == "__main__":
asyncio.run(recreate_resources_table())
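After dropping and recreating the table, the new schema can be checked without leaving the async stack. This is only a verification sketch using SQLAlchemy's inspector through `run_sync`; it assumes the same `engine` import as the script above:

```python
# Sketch: confirm the recreated resources table has the expected columns
# and unique constraint, via SQLAlchemy's inspector on the async engine.
import asyncio
from sqlalchemy import inspect
from app.database import engine

async def show_resources_schema():
    async with engine.connect() as conn:
        columns = await conn.run_sync(lambda c: inspect(c).get_columns("resources"))
        uniques = await conn.run_sync(lambda c: inspect(c).get_unique_constraints("resources"))
        print("Columns:", [col["name"] for col in columns])
        print("Unique constraints:", uniques)

if __name__ == "__main__":
    asyncio.run(show_resources_schema())
```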

View File

@ -0,0 +1,45 @@
"""
Reset admin user password to 'cosmo'
"""
import asyncio
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent))
from sqlalchemy import select, update
from app.database import AsyncSessionLocal
from app.models.db import User
async def reset_password():
# Pre-generated bcrypt hash for 'cosmo'
new_hash = '$2b$12$42d8/NAaYJlK8w/1yBd5uegdHlDkpC9XFtXYu2sWq0EXj48KAMZ0i'
async with AsyncSessionLocal() as session:
# Find admin user
result = await session.execute(
select(User).where(User.username == 'cosmo')
)
user = result.scalar_one_or_none()
if not user:
print("❌ Admin user 'cosmo' not found!")
return
print(f"Found user: {user.username}")
print(f"New password hash: {new_hash[:50]}...")
# Update password
await session.execute(
update(User)
.where(User.username == 'cosmo')
.values(password_hash=new_hash)
)
await session.commit()
print("✅ Admin password reset successfully!")
print("Username: cosmo")
print("Password: cosmo")
if __name__ == "__main__":
asyncio.run(reset_password())
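The hash above was generated ahead of time. A hedged sketch of how such a hash is produced and verified with the same `bcrypt` package used by these scripts:

```python
# Sketch: generating and verifying a bcrypt hash like the one hard-coded above.
import bcrypt

password = "cosmo"
new_hash = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")
print(new_hash)  # each run uses a fresh salt, so the hash differs every time

# Verification compares the plaintext against the stored hash:
assert bcrypt.checkpw(password.encode("utf-8"), new_hash.encode("utf-8"))
```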

40
scripts/run_sql.py 100644
View File

@ -0,0 +1,40 @@
import asyncio
import sys
from sqlalchemy import text
from app.database import get_db, init_db
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
async def run_sql_file(sql_file_path):
await init_db()
try:
with open(sql_file_path, 'r') as f:
sql_content = f.read()
# Split on semicolons so each statement is executed separately.
# Note: this naive split breaks on semicolons inside string literals
# or dollar-quoted function bodies; keep migration files simple.
statements = [s.strip() for s in sql_content.split(';') if s.strip()]
async for session in get_db():
for stmt in statements:
logger.info(f"Executing: {stmt[:50]}...")
await session.execute(text(stmt))
await session.commit()
logger.info("SQL execution completed successfully.")
except FileNotFoundError:
logger.error(f"File not found: {sql_file_path}")
except Exception as e:
logger.error(f"Error executing SQL: {e}")
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python -m scripts.run_sql <path_to_sql_file>")
sys.exit(1)
sql_file = sys.argv[1]
asyncio.run(run_sql_file(sql_file))
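The semicolon split above is intentionally naive. If the SQL files ever contain literals with semicolons, a slightly safer splitter could look like the following sketch (it still does not handle PostgreSQL dollar-quoted bodies or escaped quotes, so it is illustrative only):

```python
# Sketch: split SQL on semicolons while skipping those inside single-quoted strings.
def split_sql(sql_content: str) -> list[str]:
    statements, buf, in_quote = [], [], False
    for ch in sql_content:
        if ch == "'":
            in_quote = not in_quote
        if ch == ";" and not in_quote:
            stmt = "".join(buf).strip()
            if stmt:
                statements.append(stmt)
            buf = []
        else:
            buf.append(ch)
    tail = "".join(buf).strip()
    if tail:
        statements.append(tail)
    return statements

print(split_sql("UPDATE t SET name = 'a;b'; DELETE FROM t;"))
# ["UPDATE t SET name = 'a;b'", 'DELETE FROM t']
```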

View File

@ -0,0 +1,217 @@
#!/usr/bin/env python3
"""
Seed initial admin user, roles, and menus
Creates:
1. Two roles: admin and user
2. Admin user: cosmo / cosmo
3. Admin menu structure
Usage:
python scripts/seed_admin.py
"""
import asyncio
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent))
from sqlalchemy import select
from app.database import AsyncSessionLocal
from app.models.db import User, Role, Menu, RoleMenu
import bcrypt
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def hash_password(password: str) -> str:
"""Hash password using bcrypt"""
return bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt()).decode('utf-8')
async def main():
"""Seed admin data"""
async with AsyncSessionLocal() as session:
try:
# 1. Create roles
logger.info("Creating roles...")
# Check if roles already exist
result = await session.execute(select(Role))
existing_roles = result.scalars().all()
if existing_roles:
logger.info(f"Roles already exist: {[r.name for r in existing_roles]}")
admin_role = next((r for r in existing_roles if r.name == 'admin'), None)
user_role = next((r for r in existing_roles if r.name == 'user'), None)
else:
admin_role = Role(
name='admin',
display_name='管理员',
description='系统管理员,拥有所有权限'
)
user_role = Role(
name='user',
display_name='普通用户',
description='普通用户,仅有基本访问权限'
)
session.add(admin_role)
session.add(user_role)
await session.flush()
logger.info(f"✓ Created roles: admin, user")
# 2. Create admin user
logger.info("Creating admin user...")
# Check if admin user already exists
result = await session.execute(
select(User).where(User.username == 'cosmo')
)
existing_user = result.scalar_one_or_none()
if existing_user:
logger.info(f"Admin user 'cosmo' already exists (id={existing_user.id})")
admin_user = existing_user
else:
admin_user = User(
username='cosmo',
password_hash=hash_password('cosmo'),
email='admin@cosmo.com',
full_name='Cosmo Administrator',
is_active=True
)
session.add(admin_user)
await session.flush()
# Assign admin role to user using direct insert to avoid lazy loading
from app.models.db.user import user_roles
await session.execute(
user_roles.insert().values(
user_id=admin_user.id,
role_id=admin_role.id
)
)
await session.flush()
logger.info(f"✓ Created admin user: cosmo / cosmo")
# 3. Create admin menus
logger.info("Creating admin menus...")
# Check if menus already exist
result = await session.execute(select(Menu))
existing_menus = result.scalars().all()
if existing_menus:
logger.info(f"Menus already exist ({len(existing_menus)} menus)")
else:
# Root menu items
dashboard_menu = Menu(
name='dashboard',
title='控制台',
icon='dashboard',
path='/admin/dashboard',
component='admin/Dashboard',
sort_order=1,
is_active=True,
description='系统控制台'
)
data_management_menu = Menu(
name='data_management',
title='数据管理',
icon='database',
path=None, # Parent menu, no direct path
component=None,
sort_order=2,
is_active=True,
description='数据管理模块'
)
session.add(dashboard_menu)
session.add(data_management_menu)
await session.flush()
# Sub-menu items under data_management
celestial_bodies_menu = Menu(
parent_id=data_management_menu.id,
name='celestial_bodies',
title='天体数据列表',
icon='planet',
path='/admin/celestial-bodies',
component='admin/CelestialBodies',
sort_order=1,
is_active=True,
description='查看和管理天体数据'
)
static_data_menu = Menu(
parent_id=data_management_menu.id,
name='static_data',
title='静态数据列表',
icon='data',
path='/admin/static-data',
component='admin/StaticData',
sort_order=2,
is_active=True,
description='查看和管理静态数据(星座、星系等)'
)
nasa_data_menu = Menu(
parent_id=data_management_menu.id,
name='nasa_data',
title='NASA数据下载管理',
icon='download',
path='/admin/nasa-data',
component='admin/NasaData',
sort_order=3,
is_active=True,
description='管理NASA Horizons数据下载'
)
session.add(celestial_bodies_menu)
session.add(static_data_menu)
session.add(nasa_data_menu)
await session.flush()
logger.info(f"✓ Created {5} menu items")
# 4. Assign all menus to admin role
logger.info("Assigning menus to admin role...")
all_menus = [
dashboard_menu,
data_management_menu,
celestial_bodies_menu,
static_data_menu,
nasa_data_menu
]
for menu in all_menus:
role_menu = RoleMenu(role_id=admin_role.id, menu_id=menu.id)
session.add(role_menu)
await session.flush()
logger.info(f"✓ Assigned {len(all_menus)} menus to admin role")
await session.commit()
logger.info("\n" + "=" * 60)
logger.info("Admin data seeded successfully!")
logger.info("=" * 60)
logger.info("Admin credentials:")
logger.info(" Username: cosmo")
logger.info(" Password: cosmo")
logger.info("=" * 60)
except Exception as e:
await session.rollback()
logger.error(f"Error seeding admin data: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
asyncio.run(main())
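The menus seeded above rely on `parent_id` for nesting. As a hedged illustration (not code from the project), this is how an admin frontend would typically rebuild the tree from flat rows whose field names mirror the Menu model:

```python
# Sketch: turn flat menu rows (id, parent_id, title, sort_order) into a nested tree.
def build_menu_tree(rows: list[dict]) -> list[dict]:
    nodes = {row["id"]: {**row, "children": []} for row in rows}
    roots = []
    for node in nodes.values():
        parent_id = node.get("parent_id")
        if parent_id and parent_id in nodes:
            nodes[parent_id]["children"].append(node)
        else:
            roots.append(node)
    for node in nodes.values():
        node["children"].sort(key=lambda n: n.get("sort_order", 0))
    roots.sort(key=lambda n: n.get("sort_order", 0))
    return roots

menus = [
    {"id": 1, "parent_id": None, "title": "控制台", "sort_order": 1},
    {"id": 2, "parent_id": None, "title": "数据管理", "sort_order": 2},
    {"id": 3, "parent_id": 2, "title": "天体数据列表", "sort_order": 1},
]
print(build_menu_tree(menus))
```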

View File

@ -0,0 +1,83 @@
import asyncio
from sqlalchemy.ext.asyncio import AsyncSession
from app.database import get_db, init_db
from app.models.db import StaticData
from datetime import datetime
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
async def seed_asteroid_belts():
await init_db() # Ensure database is initialized
async for session in get_db(): # Use async for to get the session
logger.info("Seeding asteroid and Kuiper belt static data...")
belts_data = [
{
"category": "asteroid_belt",
"name": "Main Asteroid Belt",
"name_zh": "主小行星带",
"data": {
"innerRadiusAU": 2.2,
"outerRadiusAU": 3.2,
"count": 1500,
"color": "#665544",
"size": 0.1,
"opacity": 0.4,
"heightScale": 0.05,
"rotationSpeed": 0.02
}
},
{
"category": "kuiper_belt",
"name": "Kuiper Belt",
"name_zh": "柯伊伯带",
"data": {
"innerRadiusAU": 30,
"outerRadiusAU": 50,
"count": 2500,
"color": "#AABBDD",
"size": 0.2,
"opacity": 0.3,
"heightScale": 0.1,
"rotationSpeed": 0.005
}
}
]
for belt_item in belts_data:
# Check if an item with the same category and name already exists
existing_item = await session.execute(
StaticData.__table__.select().where(
StaticData.category == belt_item["category"],
StaticData.name == belt_item["name"]
)
)
if existing_item.scalar_one_or_none():
logger.info(f"Static data for {belt_item['name']} already exists. Updating...")
stmt = StaticData.__table__.update().where(
StaticData.category == belt_item["category"],
StaticData.name == belt_item["name"]
).values(
name_zh=belt_item["name_zh"],
data=belt_item["data"],
updated_at=datetime.utcnow()
)
await session.execute(stmt)
else:
logger.info(f"Adding static data for {belt_item['name']}...")
static_data_entry = StaticData(
category=belt_item["category"],
name=belt_item["name"],
name_zh=belt_item["name_zh"],
data=belt_item["data"]
)
session.add(static_data_entry)
await session.commit()
logger.info("Asteroid and Kuiper belt static data seeding complete.")
if __name__ == "__main__":
asyncio.run(seed_asteroid_belts())
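The belt entries store rendering parameters rather than individual bodies. One plausible way a renderer could consume `innerRadiusAU`, `outerRadiusAU`, `count`, and `heightScale` is sketched below; this is purely illustrative and is not taken from the project's frontend code:

```python
# Sketch: sample particle positions (in AU) from the seeded belt parameters.
import math
import random

def sample_belt(inner_au: float, outer_au: float, count: int, height_scale: float):
    points = []
    for _ in range(count):
        # Uniform over the annulus area: r^2 uniform between inner^2 and outer^2
        r = math.sqrt(random.uniform(inner_au**2, outer_au**2))
        theta = random.uniform(0.0, 2.0 * math.pi)
        y = random.gauss(0.0, height_scale * r)  # slight vertical spread
        points.append((r * math.cos(theta), y, r * math.sin(theta)))
    return points

print(sample_belt(2.2, 3.2, 5, 0.05))
```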

View File

@ -0,0 +1,193 @@
#!/usr/bin/env python3
"""
Seed celestial bodies script
Adds all celestial bodies from CELESTIAL_BODIES to the database
and fetches their current positions from NASA Horizons.
Usage:
python scripts/seed_celestial_bodies.py
"""
import sys
import os
import asyncio
from datetime import datetime
# Add backend to path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from app.database import get_db
from app.services.horizons import horizons_service
from app.services.db_service import celestial_body_service, position_service
from app.models.celestial import CELESTIAL_BODIES
async def seed_bodies():
"""Seed celestial bodies into database"""
print("\n" + "=" * 60)
print("🌌 Seeding Celestial Bodies")
print("=" * 60)
async for session in get_db():
success_count = 0
skip_count = 0
error_count = 0
total = len(CELESTIAL_BODIES)
for idx, (body_id, info) in enumerate(CELESTIAL_BODIES.items(), 1):
body_name = info["name"]
try:
# Check if body already exists
existing_body = await celestial_body_service.get_body_by_id(body_id, session)
if existing_body:
print(f" [{idx}/{total}] ⏭️ {body_name:20s} - Already exists")
skip_count += 1
continue
print(f" [{idx}/{total}] 🔄 {body_name:20s} - Creating...", end='', flush=True)
# Create body record
body_data = {
"id": body_id,
"name": info["name"],
"name_zh": info.get("name_zh"),
"type": info["type"],
"description": info.get("description"),
"extra_data": {
"launch_date": info.get("launch_date"),
"status": info.get("status"),
}
}
await celestial_body_service.create_body(body_data, session)
print(f" ✅ Created", flush=True)
success_count += 1
except Exception as e:
print(f" ❌ Error: {str(e)}", flush=True)
error_count += 1
continue
print(f"\n{'='*60}")
print(f"📊 Summary:")
print(f" ✅ Created: {success_count}")
print(f" ⏭️ Skipped: {skip_count}")
print(f" ❌ Errors: {error_count}")
print(f"{'='*60}\n")
break
async def sync_current_positions():
"""Fetch and store current positions for all bodies"""
print("\n" + "=" * 60)
print("📍 Syncing Current Positions")
print("=" * 60)
async for session in get_db():
now = datetime.utcnow()
success_count = 0
skip_count = 0
error_count = 0
all_bodies = await celestial_body_service.get_all_bodies(session)
total = len(all_bodies)
for idx, body in enumerate(all_bodies, 1):
body_id = body.id
body_name = body.name
try:
# Check if we have recent position (within last hour)
from datetime import timedelta
recent_time = now - timedelta(hours=1)
existing_positions = await position_service.get_positions(
body_id, recent_time, now, session
)
if existing_positions and len(existing_positions) > 0:
print(f" [{idx}/{total}] ⏭️ {body_name:20s} - Recent data exists")
skip_count += 1
continue
print(f" [{idx}/{total}] 🔄 {body_name:20s} - Fetching...", end='', flush=True)
# Special handling for Sun
if body_id == "10":
positions_data = [{"time": now, "x": 0.0, "y": 0.0, "z": 0.0}]
# Special handling for Cassini
elif body_id == "-82":
cassini_date = datetime(2017, 9, 15, 11, 58, 0)
positions_data = horizons_service.get_body_positions(
body_id, cassini_date, cassini_date
)
positions_data = [
{"time": p.time, "x": p.x, "y": p.y, "z": p.z}
for p in positions_data
]
else:
# Query current position
positions_data = horizons_service.get_body_positions(
body_id, now, now
)
positions_data = [
{"time": p.time, "x": p.x, "y": p.y, "z": p.z}
for p in positions_data
]
# Store positions
for pos_data in positions_data:
await position_service.save_position(
body_id=body_id,
time=pos_data["time"],
x=pos_data["x"],
y=pos_data["y"],
z=pos_data["z"],
source="nasa_horizons",
session=session,
)
print(f" ✅ Saved {len(positions_data)} position(s)", flush=True)
success_count += 1
# Small delay to avoid overwhelming NASA API
await asyncio.sleep(0.5)
except Exception as e:
print(f" ❌ Error: {str(e)}", flush=True)
error_count += 1
continue
print(f"\n{'='*60}")
print(f"📊 Summary:")
print(f" ✅ Success: {success_count}")
print(f" ⏭️ Skipped: {skip_count}")
print(f" ❌ Errors: {error_count}")
print(f"{'='*60}\n")
break
async def main():
print("\n🚀 Celestial Bodies Database Seeding")
print("=" * 60)
print("This script will:")
print(" 1. Add all celestial bodies to the database")
print(" 2. Fetch and store their current positions")
print("=" * 60)
# Seed celestial bodies
await seed_bodies()
# Sync current positions
await sync_current_positions()
print("\n🎉 Seeding complete!")
if __name__ == "__main__":
asyncio.run(main())
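Once seeding has run, the stored positions can be read back through the same service API the script uses. A minimal sketch, assuming the returned rows expose `time`/`x`/`y`/`z` attributes (as the `save_position()` fields suggest) and using "399" only as an example body id:

```python
# Sketch: read back recently stored positions via position_service.get_positions().
import asyncio
from datetime import datetime, timedelta
from app.database import get_db
from app.services.db_service import position_service

async def show_recent_positions(body_id: str = "399"):
    now = datetime.utcnow()
    async for session in get_db():
        positions = await position_service.get_positions(
            body_id, now - timedelta(days=1), now, session
        )
        for p in positions:
            print(p.time, p.x, p.y, p.z)
        break

if __name__ == "__main__":
    asyncio.run(show_recent_positions())
```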

221
scripts/setup.sh 100755
View File

@ -0,0 +1,221 @@
#!/bin/bash
# Cosmo 后端一键初始化脚本
set -e # 遇到错误立即退出
# 颜色定义
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# 日志函数
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# 打印标题
print_header() {
echo "================================================================="
echo " Cosmo 后端初始化脚本"
echo "================================================================="
echo ""
}
# 检查 Python
check_python() {
log_info "检查 Python 环境..."
if ! command -v python3 &> /dev/null; then
log_error "未找到 Python 3请先安装 Python 3.9+"
exit 1
fi
PYTHON_VERSION=$(python3 --version | awk '{print $2}')
log_success "Python 版本: $PYTHON_VERSION"
}
# 检查 PostgreSQL
check_postgresql() {
log_info "检查 PostgreSQL..."
if ! command -v psql &> /dev/null; then
log_error "未找到 psql 命令,请先安装 PostgreSQL"
exit 1
fi
# 尝试连接 PostgreSQL
if psql -U postgres -c "SELECT version();" &> /dev/null; then
log_success "PostgreSQL 连接成功"
else
log_error "无法连接到 PostgreSQL请检查"
log_error " 1. PostgreSQL 是否正在运行"
log_error " 2. 账号密码是否为 postgres/postgres"
log_error " 3. 是否允许本地连接"
exit 1
fi
}
# 检查 Redis
check_redis() {
log_info "检查 Redis..."
if ! command -v redis-cli &> /dev/null; then
log_warning "未找到 redis-cli 命令"
log_warning "Redis 是可选的,但建议安装以获得更好的缓存性能"
return
fi
# 尝试连接 Redis
if redis-cli ping &> /dev/null; then
log_success "Redis 连接成功"
else
log_warning "无法连接到 Redis"
log_warning "应用会自动降级为仅使用内存缓存"
fi
}
# 检查依赖
check_dependencies() {
log_info "检查 Python 依赖包..."
cd "$(dirname "$0")/.." # 切换到 backend 目录
# 检查 requirements.txt 是否存在
if [ ! -f "requirements.txt" ]; then
log_error "未找到 requirements.txt 文件"
exit 1
fi
# 检查关键依赖是否已安装
if ! python3 -c "import fastapi" &> /dev/null; then
log_warning "依赖包未完全安装,正在安装..."
pip install -r requirements.txt
log_success "依赖包安装完成"
else
log_success "依赖包已安装"
fi
}
# 检查 .env 文件
check_env_file() {
log_info "检查配置文件..."
cd "$(dirname "$0")/.." # 确保在 backend 目录
if [ ! -f ".env" ]; then
log_warning ".env 文件不存在,从 .env.example 创建..."
if [ -f ".env.example" ]; then
cp .env.example .env
log_success ".env 文件创建成功"
else
log_error "未找到 .env.example 文件"
exit 1
fi
else
log_success ".env 文件已存在"
fi
}
# 创建数据库
create_database() {
log_info "创建数据库..."
cd "$(dirname "$0")/.." # 确保在 backend 目录
if python3 scripts/create_db.py; then
log_success "数据库创建完成"
else
log_error "数据库创建失败"
exit 1
fi
}
# 初始化数据库表
init_database() {
log_info "初始化数据库表结构..."
cd "$(dirname "$0")/.." # 确保在 backend 目录
if python3 scripts/init_db.py; then
log_success "数据库表结构初始化完成"
else
log_error "数据库表结构初始化失败"
exit 1
fi
}
# 创建上传目录
create_upload_dir() {
log_info "创建上传目录..."
cd "$(dirname "$0")/.." # 确保在 backend 目录
if [ ! -d "upload" ]; then
mkdir -p upload
log_success "上传目录创建成功: upload/"
else
log_success "上传目录已存在: upload/"
fi
}
# 打印完成信息
print_completion() {
echo ""
echo "================================================================="
echo -e "${GREEN} ✓ 初始化完成!${NC}"
echo "================================================================="
echo ""
echo "启动服务:"
echo " cd backend"
echo " python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000"
echo ""
echo "或者:"
echo " python app/main.py"
echo ""
echo "访问:"
echo " - API 文档: http://localhost:8000/docs"
echo " - 健康检查: http://localhost:8000/health"
echo " - 根路径: http://localhost:8000/"
echo ""
echo "================================================================="
}
# 主函数
main() {
print_header
# 1. 检查环境
check_python
check_postgresql
check_redis
# 2. 安装依赖
check_dependencies
# 3. 配置文件
check_env_file
# 4. 数据库初始化
create_database
init_database
# 5. 创建必要目录
create_upload_dir
# 6. 完成
print_completion
}
# 执行主函数
main
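The PostgreSQL and Redis checks above can also be reproduced from Python, which is handy when debugging connection settings without the shell script. A hedged sketch, assuming the `asyncpg` and `redis` packages are available (not confirmed by this diff) and that the credentials match your `.env`:

```python
# Sketch: connectivity checks equivalent to check_postgresql() and check_redis().
import asyncio
import asyncpg
import redis

async def check_postgres():
    conn = await asyncpg.connect(
        host="localhost", port=5432,
        user="postgres", password="postgres",
        database="cosmo_db",  # adjust to match DATABASE_NAME in your .env
    )
    print(await conn.fetchval("SELECT version()"))
    await conn.close()

def check_redis():
    client = redis.Redis(host="localhost", port=6379, db=0)
    print("Redis ping:", client.ping())

if __name__ == "__main__":
    asyncio.run(check_postgres())
    check_redis()
```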

View File

@ -0,0 +1,49 @@
"""
Test fetching Pluto position from NASA Horizons
"""
import asyncio
from datetime import datetime, UTC
from app.services.horizons import HorizonsService
async def test_pluto():
"""Test if we can fetch Pluto's position"""
print("🔍 Testing Pluto position fetch from NASA Horizons API...")
horizons = HorizonsService()
try:
# Fetch current position for Pluto (ID: 999)
now = datetime.now(UTC)
positions = horizons.get_body_positions(
body_id="999",
start_time=now,
end_time=now,
step="1d"
)
if positions:
print(f"\n✅ Successfully fetched Pluto position!")
print(f" Time: {positions[0].time}")
print(f" Position (AU):")
print(f" X: {positions[0].x:.4f}")
print(f" Y: {positions[0].y:.4f}")
print(f" Z: {positions[0].z:.4f}")
# Calculate distance from Sun
import math
distance = math.sqrt(
positions[0].x**2 +
positions[0].y**2 +
positions[0].z**2
)
print(f" Distance from Sun: {distance:.2f} AU")
else:
print("❌ No position data returned")
except Exception as e:
print(f"❌ Error fetching Pluto position: {e}")
if __name__ == "__main__":
asyncio.run(test_pluto())
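As a small sanity check on the fetched position, the heliocentric distance can be converted to light travel time using standard constants (this is an illustrative add-on, not part of the test script):

```python
# Sketch: distance in AU -> one-way light travel time from the Sun.
AU_KM = 149_597_870.7    # kilometres per astronomical unit
C_KM_S = 299_792.458     # speed of light, km/s

def light_time_hours(distance_au: float) -> float:
    return distance_au * AU_KM / C_KM_S / 3600.0

print(f"{light_time_hours(34.0):.2f} hours")  # ~4.7 hours for a Pluto-like distance
```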

View File

@ -0,0 +1,6 @@
-- Remove the old constraint
ALTER TABLE static_data DROP CONSTRAINT IF EXISTS chk_category;
-- Add the updated constraint
ALTER TABLE static_data ADD CONSTRAINT chk_category
CHECK (category IN ('constellation', 'galaxy', 'star', 'nebula', 'cluster', 'asteroid_belt', 'kuiper_belt'));

View File

@ -0,0 +1,623 @@
#!/usr/bin/env python3
"""
Update static_data table with expanded astronomical data
"""
import asyncio
import sys
import os
# Add parent directory to path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from app.database import get_db
from app.services.db_service import static_data_service
from app.models.db import StaticData
from sqlalchemy import select, update, insert
from sqlalchemy.dialects.postgresql import insert as pg_insert
# Expanded constellation data (15 constellations)
CONSTELLATIONS = [
{
"name": "Orion",
"name_zh": "猎户座",
"data": {
"stars": [
{"name": "Betelgeuse", "ra": 88.79, "dec": 7.41},
{"name": "Bellatrix", "ra": 81.28, "dec": 6.35},
{"name": "Alnitak", "ra": 85.19, "dec": -1.94},
{"name": "Alnilam", "ra": 84.05, "dec": -1.20},
{"name": "Mintaka", "ra": 83.00, "dec": -0.30},
{"name": "Saiph", "ra": 86.94, "dec": -9.67},
{"name": "Rigel", "ra": 78.63, "dec": -8.20}
],
"lines": [[0, 1], [1, 2], [2, 3], [3, 4], [2, 5], [5, 6]]
}
},
{
"name": "Ursa Major",
"name_zh": "大熊座",
"data": {
"stars": [
{"name": "Dubhe", "ra": 165.93, "dec": 61.75},
{"name": "Merak", "ra": 165.46, "dec": 56.38},
{"name": "Phecda", "ra": 178.46, "dec": 53.69},
{"name": "Megrez", "ra": 183.86, "dec": 57.03},
{"name": "Alioth", "ra": 193.51, "dec": 55.96},
{"name": "Mizar", "ra": 200.98, "dec": 54.93},
{"name": "Alkaid", "ra": 206.89, "dec": 49.31}
],
"lines": [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
}
},
{
"name": "Cassiopeia",
"name_zh": "仙后座",
"data": {
"stars": [
{"name": "Caph", "ra": 2.29, "dec": 59.15},
{"name": "Schedar", "ra": 10.13, "dec": 56.54},
{"name": "Navi", "ra": 14.18, "dec": 60.72},
{"name": "Ruchbah", "ra": 21.45, "dec": 60.24},
{"name": "Segin", "ra": 25.65, "dec": 63.67}
],
"lines": [[0, 1], [1, 2], [2, 3], [3, 4]]
}
},
{
"name": "Leo",
"name_zh": "狮子座",
"data": {
"stars": [
{"name": "Regulus", "ra": 152.09, "dec": 11.97},
{"name": "Denebola", "ra": 177.26, "dec": 14.57},
{"name": "Algieba", "ra": 154.99, "dec": 19.84},
{"name": "Zosma", "ra": 168.53, "dec": 20.52},
{"name": "Chertan", "ra": 173.95, "dec": 15.43}
],
"lines": [[0, 2], [2, 3], [3, 4], [4, 1], [1, 0]]
}
},
{
"name": "Scorpius",
"name_zh": "天蝎座",
"data": {
"stars": [
{"name": "Antares", "ra": 247.35, "dec": -26.43},
{"name": "Shaula", "ra": 263.40, "dec": -37.10},
{"name": "Sargas", "ra": 264.33, "dec": -43.00},
{"name": "Dschubba", "ra": 240.08, "dec": -22.62},
{"name": "Lesath", "ra": 262.69, "dec": -37.29}
],
"lines": [[3, 0], [0, 1], [1, 4], [1, 2]]
}
},
{
"name": "Cygnus",
"name_zh": "天鹅座",
"data": {
"stars": [
{"name": "Deneb", "ra": 310.36, "dec": 45.28},
{"name": "Sadr", "ra": 305.56, "dec": 40.26},
{"name": "Albireo", "ra": 292.68, "dec": 27.96},
{"name": "Delta Cygni", "ra": 296.24, "dec": 45.13},
{"name": "Gienah", "ra": 314.29, "dec": 33.97}
],
"lines": [[0, 1], [1, 2], [1, 3], [1, 4]]
}
},
{
"name": "Aquila",
"name_zh": "天鹰座",
"data": {
"stars": [
{"name": "Altair", "ra": 297.70, "dec": 8.87},
{"name": "Tarazed", "ra": 296.56, "dec": 10.61},
{"name": "Alshain", "ra": 298.83, "dec": 6.41},
{"name": "Deneb el Okab", "ra": 304.48, "dec": 15.07}
],
"lines": [[1, 0], [0, 2], [0, 3]]
}
},
{
"name": "Lyra",
"name_zh": "天琴座",
"data": {
"stars": [
{"name": "Vega", "ra": 279.23, "dec": 38.78},
{"name": "Sheliak", "ra": 282.52, "dec": 33.36},
{"name": "Sulafat", "ra": 284.74, "dec": 32.69},
{"name": "Delta Lyrae", "ra": 283.82, "dec": 36.90}
],
"lines": [[0, 3], [3, 1], [1, 2], [2, 0]]
}
},
{
"name": "Pegasus",
"name_zh": "飞马座",
"data": {
"stars": [
{"name": "Markab", "ra": 346.19, "dec": 15.21},
{"name": "Scheat", "ra": 345.94, "dec": 28.08},
{"name": "Algenib", "ra": 3.31, "dec": 15.18},
{"name": "Enif", "ra": 326.05, "dec": 9.88}
],
"lines": [[0, 1], [1, 2], [2, 0], [0, 3]]
}
},
{
"name": "Andromeda",
"name_zh": "仙女座",
"data": {
"stars": [
{"name": "Alpheratz", "ra": 2.10, "dec": 29.09},
{"name": "Mirach", "ra": 17.43, "dec": 35.62},
{"name": "Almach", "ra": 30.97, "dec": 42.33},
{"name": "Delta Andromedae", "ra": 8.78, "dec": 30.86}
],
"lines": [[0, 3], [3, 1], [1, 2]]
}
},
{
"name": "Taurus",
"name_zh": "金牛座",
"data": {
"stars": [
{"name": "Aldebaran", "ra": 68.98, "dec": 16.51},
{"name": "Elnath", "ra": 81.57, "dec": 28.61},
{"name": "Alcyone", "ra": 56.87, "dec": 24.11},
{"name": "Zeta Tauri", "ra": 84.41, "dec": 21.14}
],
"lines": [[0, 1], [0, 2], [1, 3]]
}
},
{
"name": "Gemini",
"name_zh": "双子座",
"data": {
"stars": [
{"name": "Pollux", "ra": 116.33, "dec": 28.03},
{"name": "Castor", "ra": 113.65, "dec": 31.89},
{"name": "Alhena", "ra": 99.43, "dec": 16.40},
{"name": "Mebsuta", "ra": 100.98, "dec": 25.13}
],
"lines": [[0, 1], [0, 2], [1, 3], [3, 2]]
}
},
{
"name": "Virgo",
"name_zh": "室女座",
"data": {
"stars": [
{"name": "Spica", "ra": 201.30, "dec": -11.16},
{"name": "Porrima", "ra": 190.42, "dec": 1.76},
{"name": "Vindemiatrix", "ra": 195.54, "dec": 10.96},
{"name": "Heze", "ra": 211.67, "dec": -0.67}
],
"lines": [[2, 1], [1, 0], [0, 3]]
}
},
{
"name": "Sagittarius",
"name_zh": "人马座",
"data": {
"stars": [
{"name": "Kaus Australis", "ra": 276.04, "dec": -34.38},
{"name": "Nunki", "ra": 283.82, "dec": -26.30},
{"name": "Ascella", "ra": 290.97, "dec": -29.88},
{"name": "Kaus Media", "ra": 276.99, "dec": -29.83},
{"name": "Kaus Borealis", "ra": 279.23, "dec": -25.42}
],
"lines": [[0, 3], [3, 4], [4, 1], [1, 2]]
}
},
{
"name": "Capricornus",
"name_zh": "摩羯座",
"data": {
"stars": [
{"name": "Deneb Algedi", "ra": 326.76, "dec": -16.13},
{"name": "Dabih", "ra": 305.25, "dec": -14.78},
{"name": "Nashira", "ra": 325.02, "dec": -16.66},
{"name": "Algedi", "ra": 304.51, "dec": -12.51}
],
"lines": [[3, 1], [1, 2], [2, 0]]
}
}
]
# Expanded galaxy data (12 galaxies)
GALAXIES = [
{
"name": "Andromeda Galaxy",
"name_zh": "仙女座星系",
"data": {
"type": "spiral",
"distance_mly": 2.537,
"ra": 10.68,
"dec": 41.27,
"magnitude": 3.44,
"diameter_kly": 220,
"color": "#CCDDFF"
}
},
{
"name": "Triangulum Galaxy",
"name_zh": "三角座星系",
"data": {
"type": "spiral",
"distance_mly": 2.73,
"ra": 23.46,
"dec": 30.66,
"magnitude": 5.72,
"diameter_kly": 60,
"color": "#AACCEE"
}
},
{
"name": "Large Magellanic Cloud",
"name_zh": "大麦哲伦云",
"data": {
"type": "irregular",
"distance_mly": 0.163,
"ra": 80.89,
"dec": -69.76,
"magnitude": 0.9,
"diameter_kly": 14,
"color": "#DDCCFF"
}
},
{
"name": "Small Magellanic Cloud",
"name_zh": "小麦哲伦云",
"data": {
"type": "irregular",
"distance_mly": 0.197,
"ra": 12.80,
"dec": -73.15,
"magnitude": 2.7,
"diameter_kly": 7,
"color": "#CCBBEE"
}
},
{
"name": "Milky Way Center",
"name_zh": "银河系中心",
"data": {
"type": "galactic_center",
"distance_mly": 0.026,
"ra": 266.42,
"dec": -29.01,
"magnitude": -1,
"diameter_kly": 100,
"color": "#FFFFAA"
}
},
{
"name": "Whirlpool Galaxy",
"name_zh": "漩涡星系",
"data": {
"type": "spiral",
"distance_mly": 23,
"ra": 202.47,
"dec": 47.20,
"magnitude": 8.4,
"diameter_kly": 76,
"color": "#AADDFF"
}
},
{
"name": "Sombrero Galaxy",
"name_zh": "草帽星系",
"data": {
"type": "spiral",
"distance_mly": 29.3,
"ra": 189.99,
"dec": -11.62,
"magnitude": 8.0,
"diameter_kly": 50,
"color": "#FFDDAA"
}
},
{
"name": "Pinwheel Galaxy",
"name_zh": "风车星系",
"data": {
"type": "spiral",
"distance_mly": 21,
"ra": 210.80,
"dec": 54.35,
"magnitude": 7.9,
"diameter_kly": 170,
"color": "#BBDDFF"
}
},
{
"name": "Bode's Galaxy",
"name_zh": "波德星系",
"data": {
"type": "spiral",
"distance_mly": 11.8,
"ra": 148.97,
"dec": 69.07,
"magnitude": 6.9,
"diameter_kly": 90,
"color": "#CCDDFF"
}
},
{
"name": "Cigar Galaxy",
"name_zh": "雪茄星系",
"data": {
"type": "starburst",
"distance_mly": 11.5,
"ra": 148.97,
"dec": 69.68,
"magnitude": 8.4,
"diameter_kly": 37,
"color": "#FFCCAA"
}
},
{
"name": "Centaurus A",
"name_zh": "半人马座A",
"data": {
"type": "elliptical",
"distance_mly": 13.7,
"ra": 201.37,
"dec": -43.02,
"magnitude": 6.8,
"diameter_kly": 60,
"color": "#FFDDCC"
}
},
{
"name": "Sculptor Galaxy",
"name_zh": "玉夫座星系",
"data": {
"type": "spiral",
"distance_mly": 11.4,
"ra": 15.15,
"dec": -25.29,
"magnitude": 7.2,
"diameter_kly": 90,
"color": "#CCDDEE"
}
}
]
# Nebula data (12 nebulae)
NEBULAE = [
{
"name": "Orion Nebula",
"name_zh": "猎户座大星云",
"data": {
"type": "emission",
"distance_ly": 1344,
"ra": 83.82,
"dec": -5.39,
"magnitude": 4.0,
"diameter_ly": 24,
"color": "#FF6B9D"
}
},
{
"name": "Eagle Nebula",
"name_zh": "鹰状星云",
"data": {
"type": "emission",
"distance_ly": 7000,
"ra": 274.70,
"dec": -13.80,
"magnitude": 6.0,
"diameter_ly": 70,
"color": "#FF8B7D"
}
},
{
"name": "Crab Nebula",
"name_zh": "蟹状星云",
"data": {
"type": "supernova_remnant",
"distance_ly": 6500,
"ra": 83.63,
"dec": 22.01,
"magnitude": 8.4,
"diameter_ly": 11,
"color": "#FFAA66"
}
},
{
"name": "Ring Nebula",
"name_zh": "环状星云",
"data": {
"type": "planetary",
"distance_ly": 2300,
"ra": 283.40,
"dec": 33.03,
"magnitude": 8.8,
"diameter_ly": 1,
"color": "#66DDFF"
}
},
{
"name": "Helix Nebula",
"name_zh": "螺旋星云",
"data": {
"type": "planetary",
"distance_ly": 700,
"ra": 337.41,
"dec": -20.84,
"magnitude": 7.6,
"diameter_ly": 2.5,
"color": "#88CCFF"
}
},
{
"name": "Lagoon Nebula",
"name_zh": "礁湖星云",
"data": {
"type": "emission",
"distance_ly": 4100,
"ra": 270.93,
"dec": -24.38,
"magnitude": 6.0,
"diameter_ly": 55,
"color": "#FF99AA"
}
},
{
"name": "Horsehead Nebula",
"name_zh": "马头星云",
"data": {
"type": "dark",
"distance_ly": 1500,
"ra": 85.30,
"dec": -2.46,
"magnitude": 10.0,
"diameter_ly": 3.5,
"color": "#886655"
}
},
{
"name": "Eta Carinae Nebula",
"name_zh": "船底座η星云",
"data": {
"type": "emission",
"distance_ly": 7500,
"ra": 161.26,
"dec": -59.87,
"magnitude": 3.0,
"diameter_ly": 300,
"color": "#FFAACC"
}
},
{
"name": "North America Nebula",
"name_zh": "北美洲星云",
"data": {
"type": "emission",
"distance_ly": 1600,
"ra": 312.95,
"dec": 44.32,
"magnitude": 4.0,
"diameter_ly": 50,
"color": "#FF7788"
}
},
{
"name": "Trifid Nebula",
"name_zh": "三叶星云",
"data": {
"type": "emission",
"distance_ly": 5200,
"ra": 270.36,
"dec": -23.03,
"magnitude": 6.3,
"diameter_ly": 25,
"color": "#FF99DD"
}
},
{
"name": "Dumbbell Nebula",
"name_zh": "哑铃星云",
"data": {
"type": "planetary",
"distance_ly": 1360,
"ra": 299.90,
"dec": 22.72,
"magnitude": 7.5,
"diameter_ly": 1.44,
"color": "#77DDFF"
}
},
{
"name": "Veil Nebula",
"name_zh": "面纱星云",
"data": {
"type": "supernova_remnant",
"distance_ly": 2400,
"ra": 312.92,
"dec": 30.72,
"magnitude": 7.0,
"diameter_ly": 110,
"color": "#AADDFF"
}
}
]
async def update_static_data():
"""Update static_data table with expanded astronomical data"""
print("=" * 60)
print("Updating static_data table")
print("=" * 60)
async for session in get_db():
# Update constellations
print(f"\nUpdating {len(CONSTELLATIONS)} constellations...")
for const in CONSTELLATIONS:
stmt = pg_insert(StaticData).values(
category="constellation",
name=const["name"],
name_zh=const["name_zh"],
data=const["data"]
)
stmt = stmt.on_conflict_do_update(
index_elements=['category', 'name'],
set_={
'name_zh': const["name_zh"],
'data': const["data"]
}
)
await session.execute(stmt)
print(f"{const['name']} ({const['name_zh']})")
# Update galaxies
print(f"\nUpdating {len(GALAXIES)} galaxies...")
for galaxy in GALAXIES:
stmt = pg_insert(StaticData).values(
category="galaxy",
name=galaxy["name"],
name_zh=galaxy["name_zh"],
data=galaxy["data"]
)
stmt = stmt.on_conflict_do_update(
index_elements=['category', 'name'],
set_={
'name_zh': galaxy["name_zh"],
'data': galaxy["data"]
}
)
await session.execute(stmt)
print(f"{galaxy['name']} ({galaxy['name_zh']})")
# Insert nebulae
print(f"\nInserting {len(NEBULAE)} nebulae...")
for nebula in NEBULAE:
stmt = pg_insert(StaticData).values(
category="nebula",
name=nebula["name"],
name_zh=nebula["name_zh"],
data=nebula["data"]
)
stmt = stmt.on_conflict_do_update(
index_elements=['category', 'name'],
set_={
'name_zh': nebula["name_zh"],
'data': nebula["data"]
}
)
await session.execute(stmt)
print(f"{nebula['name']} ({nebula['name_zh']})")
await session.commit()
break # Only use first session
print("\n" + "=" * 60)
print("✓ Static data update complete!")
print("=" * 60)
if __name__ == "__main__":
asyncio.run(update_static_data())
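The constellation entries store star coordinates as RA/Dec in degrees. One plausible way a frontend could place them on a sky sphere is the projection below; it uses a y-up convention and is not taken from the project's rendering code:

```python
# Sketch: project stored RA/Dec (degrees) onto a unit sky sphere.
import math

def radec_to_xyz(ra_deg: float, dec_deg: float, radius: float = 1.0):
    ra = math.radians(ra_deg)
    dec = math.radians(dec_deg)
    x = radius * math.cos(dec) * math.cos(ra)
    y = radius * math.sin(dec)
    z = radius * math.cos(dec) * math.sin(ra)
    return x, y, z

# Betelgeuse from the Orion entry above:
print(radec_to_xyz(88.79, 7.41))
```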

[A number of binary files added in this commit, including image assets between roughly 12 KiB and 3.8 MiB each, are not shown in this diff.]

Some files were not shown because too many files have changed in this diff.