前端功能初步完成
|
|
@@ -7,3 +7,23 @@ CORS_ORIGINS=http://localhost:5173,http://localhost:3000
|
|||
|
||||
# Cache Settings
|
||||
CACHE_TTL_DAYS=3
|
||||
|
||||
# Database Settings (PostgreSQL)
|
||||
DATABASE_HOST=localhost
|
||||
DATABASE_PORT=5432
|
||||
DATABASE_NAME=cosmo_db
|
||||
DATABASE_USER=postgres
|
||||
DATABASE_PASSWORD=postgres
|
||||
DATABASE_POOL_SIZE=20
|
||||
DATABASE_MAX_OVERFLOW=10
|
||||
|
||||
# Redis Settings
|
||||
REDIS_HOST=localhost
|
||||
REDIS_PORT=6379
|
||||
REDIS_DB=0
|
||||
REDIS_PASSWORD=
|
||||
REDIS_MAX_CONNECTIONS=50
|
||||
|
||||
# File Upload Settings
|
||||
UPLOAD_DIR=upload
|
||||
MAX_UPLOAD_SIZE=10485760 # 10MB in bytes
|
||||
|
|
|
|||
|
|
@@ -0,0 +1,121 @@
|
|||
# 后台管理系统 - 进度报告
|
||||
|
||||
## 已完成的工作
|
||||
|
||||
### 1. 数据库设计和初始化 ✅
|
||||
|
||||
#### 创建的数据库表:
|
||||
- **users** - 用户表
|
||||
- id (主键)
|
||||
- username (用户名,唯一)
|
||||
- password_hash (密码哈希)
|
||||
- email (邮箱)
|
||||
- full_name (全名)
|
||||
- is_active (激活状态)
|
||||
- created_at, updated_at, last_login_at (时间戳)
|
||||
|
||||
- **roles** - 角色表
|
||||
- id (主键)
|
||||
- name (角色名,如 'admin', 'user')
|
||||
- display_name (显示名称)
|
||||
- description (描述)
|
||||
- created_at, updated_at
|
||||
|
||||
- **user_roles** - 用户-角色关联表 (多对多)
|
||||
- user_id, role_id (复合主键)
|
||||
- created_at
|
||||
|
||||
- **menus** - 菜单表
|
||||
- id (主键)
|
||||
- parent_id (父菜单ID,支持树形结构)
|
||||
- name (菜单名)
|
||||
- title (显示标题)
|
||||
- icon (图标名)
|
||||
- path (路由路径)
|
||||
- component (组件路径)
|
||||
- sort_order (排序)
|
||||
- is_active (激活状态)
|
||||
- description (描述)
|
||||
- created_at, updated_at
|
||||
|
||||
- **role_menus** - 角色-菜单关联表
|
||||
- id (主键)
|
||||
- role_id, menu_id
|
||||
- created_at
|
||||
|
||||
### 2. 初始化数据 ✅
|
||||
|
||||
#### 角色数据:
|
||||
- **admin** - 管理员角色(拥有所有权限)
|
||||
- **user** - 普通用户角色(基本访问权限)
|
||||
|
||||
#### 管理员用户:
|
||||
- 用户名:`cosmo`
|
||||
- 密码:`cosmo`
|
||||
- 邮箱:admin@cosmo.com
|
||||
- 角色:admin
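
初始化完成后,可以用该管理员账号对登录接口做一次冒烟测试。下面是一个基于 httpx 的示意(接口路径 `/api/auth/login` 与端口 8000 按默认配置假设,实际以部署为准):

```python
import httpx

# 使用初始化的 cosmo/cosmo 管理员账号登录,获取 JWT(示意)
resp = httpx.post(
    "http://localhost:8000/api/auth/login",
    json={"username": "cosmo", "password": "cosmo"},
)
resp.raise_for_status()
token = resp.json()["access_token"]
print("access_token:", token[:20], "...")
```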
|
||||
|
||||
#### 菜单结构:
|
||||
```
|
||||
├── 控制台 (/admin/dashboard)
|
||||
└── 数据管理 (父菜单)
|
||||
├── 天体数据列表 (/admin/celestial-bodies)
|
||||
├── 静态数据列表 (/admin/static-data)
|
||||
└── NASA数据下载管理 (/admin/nasa-data)
|
||||
```
|
||||
|
||||
### 3. 代码文件
|
||||
|
||||
#### 数据库模型 (ORM)
|
||||
- `/backend/app/models/db/user.py` - 用户模型
|
||||
- `/backend/app/models/db/role.py` - 角色模型
|
||||
- `/backend/app/models/db/menu.py` - 菜单模型
|
||||
|
||||
#### 脚本
|
||||
- `/backend/scripts/seed_admin.py` - 初始化管理员数据的脚本
|
||||
|
||||
#### 依赖
|
||||
- 新增 `bcrypt==5.0.0` 用于密码哈希
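
下面是 bcrypt 哈希与校验的最小示意(函数名 `hash_password` / `verify_password` 为示例命名,实际实现以 `app/services/auth.py` 为准):

```python
import bcrypt

def hash_password(password: str) -> str:
    # 生成随机盐并计算哈希,结果可直接存入 users.password_hash
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")

def verify_password(plain: str, password_hash: str) -> bool:
    # 校验明文密码与已存哈希是否匹配
    return bcrypt.checkpw(plain.encode("utf-8"), password_hash.encode("utf-8"))

print(verify_password("cosmo", hash_password("cosmo")))  # True
```

bcrypt 的哈希结果中自带盐值,因此不需要单独存储盐。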
|
||||
|
||||
### 4. 执行的脚本
|
||||
|
||||
```bash
|
||||
# 1. 创建数据库表
|
||||
./venv/bin/python scripts/init_db.py
|
||||
|
||||
# 2. 初始化管理员数据
|
||||
./venv/bin/python scripts/seed_admin.py
|
||||
```
|
||||
|
||||
## 数据库表关系
|
||||
|
||||
```
|
||||
users ←→ user_roles ←→ roles
|
||||
↓
|
||||
role_menus
|
||||
↓
|
||||
menus (支持父子关系)
|
||||
```
|
||||
|
||||
## 下一步工作
|
||||
|
||||
根据用户要求,后续需要实现:
|
||||
|
||||
1. **后台管理系统 - 天体数据列表**
|
||||
- API接口:CRUD操作
|
||||
- 前端页面:列表、编辑、新增
|
||||
|
||||
2. **后台管理系统 - 静态数据列表**
|
||||
- API接口:管理星座、星系等静态数据
|
||||
- 前端页面:分类管理
|
||||
|
||||
3. **后台管理系统 - NASA数据下载管理**
|
||||
- API接口:查看下载历史、触发数据更新
|
||||
- 前端页面:数据下载状态监控
|
||||
|
||||
## 注意事项
|
||||
|
||||
- 所有密码使用 bcrypt 加密存储
|
||||
- 菜单系统支持无限层级(通过 parent_id)
|
||||
- 角色-菜单权限通过 role_menus 表控制
|
||||
- 当前已创建管理员用户,可直接登录测试
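
关于「菜单系统支持无限层级」,下面是把扁平的 menus 记录组装成树的一个最小示意(纯内存组装,字段名与上文 menus 表一致,仅供参考):

```python
def build_menu_tree(rows: list[dict]) -> list[dict]:
    """按 parent_id 挂接子菜单,并按 sort_order 排序"""
    nodes = {r["id"]: {**r, "children": []} for r in rows}
    roots = []
    for node in nodes.values():
        parent_id = node.get("parent_id")
        if parent_id is not None and parent_id in nodes:
            nodes[parent_id]["children"].append(node)
        else:
            roots.append(node)
    for node in nodes.values():
        node["children"].sort(key=lambda n: n.get("sort_order", 0))
    roots.sort(key=lambda n: n.get("sort_order", 0))
    return roots

# 用法示例(数据对应上文菜单结构)
menus = [
    {"id": 1, "parent_id": None, "title": "控制台", "sort_order": 1},
    {"id": 2, "parent_id": None, "title": "数据管理", "sort_order": 2},
    {"id": 3, "parent_id": 2, "title": "天体数据列表", "sort_order": 1},
]
print(build_menu_tree(menus))
```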
|
||||
|
|
@@ -0,0 +1,239 @@
|
|||
# Cosmo 后端配置说明
|
||||
|
||||
## 配置文件结构
|
||||
|
||||
```
|
||||
backend/
|
||||
├── .env # 实际配置文件(不提交到 Git)
|
||||
├── .env.example # 配置模板(提交到 Git)
|
||||
├── app/
|
||||
│ └── config.py # 配置管理(Pydantic Settings)
|
||||
└── scripts/
|
||||
├── create_db.py # 创建数据库
|
||||
├── init_db.py # 初始化表结构
|
||||
└── setup.sh # 一键初始化脚本
|
||||
```
|
||||
|
||||
## 配置项说明
|
||||
|
||||
### 1. PostgreSQL 数据库配置
|
||||
|
||||
```bash
|
||||
DATABASE_HOST=localhost # 数据库主机
|
||||
DATABASE_PORT=5432 # 数据库端口
|
||||
DATABASE_NAME=cosmo_db # 数据库名称
|
||||
DATABASE_USER=postgres # 数据库用户名
|
||||
DATABASE_PASSWORD=postgres # 数据库密码
|
||||
DATABASE_POOL_SIZE=20 # 连接池大小
|
||||
DATABASE_MAX_OVERFLOW=10 # 连接池最大溢出数
|
||||
```
|
||||
|
||||
**默认配置**:
|
||||
- 默认账号/密码:`postgres/postgres`(用户名与密码为同一字符串)
|
||||
- 本地数据库:`localhost:5432`
|
||||
- 数据库名称:`cosmo_db`
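
`app/config.py` 基于 Pydantic Settings 读取这些变量。下面是一个拼接异步连接串的简化示意(字段名按与环境变量同名假设,实际实现以 config.py 为准):

```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", env_file_encoding="utf-8")

    database_host: str = "localhost"
    database_port: int = 5432
    database_name: str = "cosmo_db"
    database_user: str = "postgres"
    database_password: str = "postgres"

    @property
    def database_url(self) -> str:
        # SQLAlchemy 异步连接串(asyncpg 驱动)
        return (
            f"postgresql+asyncpg://{self.database_user}:{self.database_password}"
            f"@{self.database_host}:{self.database_port}/{self.database_name}"
        )

settings = Settings()
print(settings.database_url)
```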
|
||||
|
||||
### 2. Redis 缓存配置
|
||||
|
||||
```bash
|
||||
REDIS_HOST=localhost # Redis 主机
|
||||
REDIS_PORT=6379 # Redis 端口
|
||||
REDIS_DB=0 # Redis 数据库编号(0-15)
|
||||
REDIS_PASSWORD= # Redis 密码(留空表示无密码)
|
||||
REDIS_MAX_CONNECTIONS=50 # 最大连接数
|
||||
```
|
||||
|
||||
**默认配置**:
|
||||
- 本地 Redis:`localhost:6379`
|
||||
- 无密码认证
|
||||
- 使用 0 号数据库
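
按上述默认配置创建异步 Redis 客户端并做连通性检查的示意(使用 redis-py 的 asyncio 接口,仅供参考):

```python
import asyncio
import redis.asyncio as redis

async def main() -> None:
    # localhost:6379,0 号库,无密码,最多 50 个连接
    client = redis.Redis(host="localhost", port=6379, db=0, password=None, max_connections=50)
    print(await client.ping())  # 连接正常时返回 True
    await client.close()

asyncio.run(main())
```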
|
||||
|
||||
### 3. 应用配置
|
||||
|
||||
```bash
|
||||
APP_NAME=Cosmo - Deep Space Explorer
|
||||
API_PREFIX=/api
|
||||
CORS_ORIGINS=["*"] # 开发环境允许所有来源
|
||||
CACHE_TTL_DAYS=3 # NASA API 缓存天数
|
||||
```
|
||||
|
||||
### 4. 文件上传配置
|
||||
|
||||
```bash
|
||||
UPLOAD_DIR=upload # 上传目录
|
||||
MAX_UPLOAD_SIZE=10485760 # 最大文件大小(10MB)
|
||||
```
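
其中 10485760 即 10MB 对应的字节数:

```python
# 10 MB 换算为字节数,与 MAX_UPLOAD_SIZE 一致
print(10 * 1024 * 1024)  # 10485760
```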
|
||||
|
||||
## 快速开始
|
||||
|
||||
### 1. 确保服务运行
|
||||
|
||||
确保本机已安装并启动 PostgreSQL 和 Redis:
|
||||
|
||||
```bash
|
||||
# 检查 PostgreSQL
|
||||
psql -U postgres -c "SELECT version();"
|
||||
|
||||
# 检查 Redis
|
||||
redis-cli ping # 应返回 PONG
|
||||
```
|
||||
|
||||
### 2. 配置环境变量
|
||||
|
||||
配置文件 `.env` 已经创建好了,默认配置如下:
|
||||
- PostgreSQL: `postgres/postgres@localhost:5432/cosmo_db`
|
||||
- Redis: `localhost:6379`(无密码)
|
||||
|
||||
如需修改,直接编辑 `backend/.env` 文件。
|
||||
|
||||
### 3. 安装依赖
|
||||
|
||||
```bash
|
||||
cd backend
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
|
||||
### 4. 初始化数据库
|
||||
|
||||
```bash
|
||||
# 方式一:使用一键脚本(推荐)
|
||||
chmod +x scripts/setup.sh
|
||||
./scripts/setup.sh
|
||||
|
||||
# 方式二:手动执行
|
||||
python scripts/create_db.py # 创建数据库
|
||||
python scripts/init_db.py # 初始化表结构
|
||||
```
|
||||
|
||||
### 5. 启动服务
|
||||
|
||||
```bash
|
||||
# 开发模式(自动重载)
|
||||
python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
|
||||
|
||||
# 或者直接运行
|
||||
python app/main.py
|
||||
```
|
||||
|
||||
访问:
|
||||
- API 文档:http://localhost:8000/docs
|
||||
- 健康检查:http://localhost:8000/health
|
||||
|
||||
## 配置验证
|
||||
|
||||
启动服务后,访问健康检查端点验证配置:
|
||||
|
||||
```bash
|
||||
curl http://localhost:8000/health
|
||||
```
|
||||
|
||||
正常响应示例:
|
||||
```json
|
||||
{
|
||||
"status": "healthy",
|
||||
"redis": {
|
||||
"connected": true,
|
||||
"used_memory_human": "1.2M",
|
||||
"connected_clients": 2
|
||||
},
|
||||
"database": "connected"
|
||||
}
|
||||
```
|
||||
|
||||
## 常见问题
|
||||
|
||||
### PostgreSQL 连接失败
|
||||
|
||||
**问题**:`Connection refused` 或 `password authentication failed`
|
||||
|
||||
**解决方案**:
|
||||
1. 确保 PostgreSQL 正在运行
|
||||
2. 检查 `.env` 中的账号密码是否正确
|
||||
3. 验证用户权限:
|
||||
```bash
|
||||
psql -U postgres -c "SELECT current_user;"
|
||||
```
|
||||
|
||||
### Redis 连接失败
|
||||
|
||||
**问题**:Redis 连接失败但服务继续运行
|
||||
|
||||
**说明**:
|
||||
- Redis 连接失败时,应用会自动降级为仅使用内存缓存
|
||||
- 不影响核心功能,但会失去跨进程缓存能力
|
||||
- 日志会显示警告:`⚠ Redis connection failed`
|
||||
|
||||
**解决方案**:
|
||||
1. 确保 Redis 正在运行:`redis-cli ping`
|
||||
2. 检查 Redis 端口:`lsof -i :6379`
|
||||
3. 重启 Redis:
|
||||
- macOS: `brew services restart redis`
|
||||
- Linux: `sudo systemctl restart redis`
|
||||
|
||||
### 数据库已存在
|
||||
|
||||
**问题**:`database "cosmo_db" already exists`
|
||||
|
||||
**说明**:这是正常提示,不是错误。
|
||||
|
||||
**解决方案**:
|
||||
- 如果需要重置数据库,先删除再创建:
|
||||
```bash
|
||||
psql -U postgres -c "DROP DATABASE cosmo_db;"
|
||||
python scripts/create_db.py
|
||||
python scripts/init_db.py
|
||||
```
|
||||
|
||||
## 生产环境配置
|
||||
|
||||
生产环境建议修改以下配置:
|
||||
|
||||
```bash
|
||||
# 安全配置
|
||||
CORS_ORIGINS=["https://yourdomain.com"] # 限制跨域来源
|
||||
|
||||
# 数据库优化
|
||||
DATABASE_POOL_SIZE=50 # 增加连接池大小
|
||||
DATABASE_MAX_OVERFLOW=20
|
||||
|
||||
# Redis 密码
|
||||
REDIS_PASSWORD=your_secure_password # 设置 Redis 密码
|
||||
```
|
||||
|
||||
## 配置管理最佳实践
|
||||
|
||||
1. **不要提交 `.env` 文件到 Git**
|
||||
- `.env` 已在 `.gitignore` 中
|
||||
- 只提交 `.env.example` 作为模板
|
||||
|
||||
2. **使用环境变量覆盖**
|
||||
```bash
|
||||
export DATABASE_PASSWORD=new_password
|
||||
python app/main.py
|
||||
```
|
||||
|
||||
3. **多环境配置**
|
||||
```bash
|
||||
.env.development # 开发环境
|
||||
.env.production # 生产环境
|
||||
.env.test # 测试环境
|
||||
```
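
一种按环境变量选择配置文件的做法示意(`APP_ENV` 变量名与默认值均为假设,项目当前未必如此实现):

```python
import os
from pydantic_settings import BaseSettings, SettingsConfigDict

# 假设用 APP_ENV 区分环境:development / production / test
ENV = os.getenv("APP_ENV", "development")

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=f".env.{ENV}", env_file_encoding="utf-8")

    database_host: str = "localhost"
    redis_host: str = "localhost"

settings = Settings()
```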
|
||||
|
||||
## 技术栈
|
||||
|
||||
- **FastAPI** - Web 框架
|
||||
- **SQLAlchemy 2.0** - ORM(异步模式)
|
||||
- **asyncpg** - PostgreSQL 异步驱动
|
||||
- **Redis** - 缓存层
|
||||
- **Pydantic Settings** - 配置管理
|
||||
|
||||
## 数据库设计
|
||||
|
||||
详细的数据库表结构设计请参考 [`DATABASE_SCHEMA.md`](./DATABASE_SCHEMA.md)。
|
||||
|
||||
主要数据表:
|
||||
- `celestial_bodies` - 天体基本信息
|
||||
- `positions` - 位置历史(时间序列)
|
||||
- `resources` - 资源文件管理
|
||||
- `static_data` - 静态天文数据
|
||||
- `nasa_cache` - NASA API 缓存
|
||||
|
|
@@ -0,0 +1,450 @@
|
|||
# Cosmo 数据库表结构设计
|
||||
|
||||
## 数据库信息
|
||||
- **数据库类型**: PostgreSQL 15+
|
||||
- **数据库名称**: cosmo_db
|
||||
- **字符集**: UTF8
|
||||
|
||||
---
|
||||
|
||||
## 表结构
|
||||
|
||||
### 1. celestial_bodies - 天体基本信息表
|
||||
|
||||
存储所有天体的基本信息和元数据。
|
||||
|
||||
```sql
|
||||
CREATE TABLE celestial_bodies (
|
||||
id VARCHAR(50) PRIMARY KEY, -- JPL Horizons ID 或自定义ID
|
||||
name VARCHAR(200) NOT NULL, -- 英文名称
|
||||
name_zh VARCHAR(200), -- 中文名称
|
||||
type VARCHAR(50) NOT NULL, -- 天体类型: star, planet, moon, probe, comet, asteroid, etc.
|
||||
description TEXT, -- 描述
|
||||
metadata JSONB, -- 扩展元数据(launch_date, status, mass, radius等)
|
||||
    is_active BOOLEAN, -- 天体有效状态
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
    updated_at TIMESTAMP DEFAULT NOW(),
|
||||
|
||||
CONSTRAINT chk_type CHECK (type IN ('star', 'planet', 'moon', 'probe', 'comet', 'asteroid', 'dwarf_planet', 'satellite'))
|
||||
);
|
||||
|
||||
-- 索引
|
||||
CREATE INDEX idx_celestial_bodies_type ON celestial_bodies(type);
|
||||
CREATE INDEX idx_celestial_bodies_name ON celestial_bodies(name);
|
||||
|
||||
-- 注释
|
||||
COMMENT ON TABLE celestial_bodies IS '天体基本信息表';
|
||||
COMMENT ON COLUMN celestial_bodies.id IS 'JPL Horizons ID(如-31代表Voyager 1)或自定义ID';
|
||||
COMMENT ON COLUMN celestial_bodies.type IS '天体类型:star(恒星), planet(行星), moon(卫星), probe(探测器), comet(彗星), asteroid(小行星)';
|
||||
COMMENT ON COLUMN celestial_bodies.metadata IS 'JSON格式的扩展元数据,例如:{"launch_date": "1977-09-05", "status": "active", "mass": 722, "radius": 2575}';
|
||||
```
|
||||
|
||||
**metadata JSONB字段示例**:
|
||||
```json
|
||||
{
|
||||
"launch_date": "1977-09-05",
|
||||
"status": "active",
|
||||
"mass": 722, // kg
|
||||
"radius": 2575, // km
|
||||
"orbit_period": 365.25, // days
|
||||
"rotation_period": 24, // hours
|
||||
"discovery_date": "1930-02-18",
|
||||
"discoverer": "Clyde Tombaugh"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 2. positions - 位置历史表(时间序列)
|
||||
|
||||
存储天体的位置历史数据,支持历史查询和轨迹回放。
|
||||
|
||||
```sql
|
||||
CREATE TABLE positions (
|
||||
id BIGSERIAL PRIMARY KEY,
|
||||
body_id VARCHAR(50) NOT NULL REFERENCES celestial_bodies(id) ON DELETE CASCADE,
|
||||
time TIMESTAMP NOT NULL, -- 位置时间点
|
||||
x DOUBLE PRECISION NOT NULL, -- X坐标(AU,日心坐标系)
|
||||
y DOUBLE PRECISION NOT NULL, -- Y坐标(AU)
|
||||
z DOUBLE PRECISION NOT NULL, -- Z坐标(AU)
|
||||
vx DOUBLE PRECISION, -- X方向速度(可选)
|
||||
vy DOUBLE PRECISION, -- Y方向速度(可选)
|
||||
vz DOUBLE PRECISION, -- Z方向速度(可选)
|
||||
source VARCHAR(50) DEFAULT 'nasa_horizons', -- 数据来源
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
|
||||
CONSTRAINT chk_source CHECK (source IN ('nasa_horizons', 'calculated', 'user_defined', 'imported'))
|
||||
);
|
||||
|
||||
-- 索引(非常重要,用于高效查询)
|
||||
CREATE INDEX idx_positions_body_time ON positions(body_id, time DESC);
|
||||
CREATE INDEX idx_positions_time ON positions(time);
|
||||
CREATE INDEX idx_positions_body_id ON positions(body_id);
|
||||
|
||||
-- 注释
|
||||
COMMENT ON TABLE positions IS '天体位置历史表(时间序列数据)';
|
||||
COMMENT ON COLUMN positions.body_id IS '关联celestial_bodies表的天体ID';
|
||||
COMMENT ON COLUMN positions.time IS '该位置的观测/计算时间(UTC)';
|
||||
COMMENT ON COLUMN positions.x IS 'X坐标,单位AU(天文单位),日心坐标系';
|
||||
COMMENT ON COLUMN positions.source IS '数据来源:nasa_horizons(NASA API), calculated(计算), user_defined(用户定义), imported(导入)';
|
||||
```
|
||||
|
||||
**使用场景**:
|
||||
- 查询某天体在某时间点的位置
|
||||
- 查询某天体在时间范围内的轨迹
|
||||
- 支持时间旅行功能(回放历史位置)
|
||||
|
||||
---
|
||||
|
||||
### 3. resources - 资源文件管理表
|
||||
|
||||
统一管理纹理、3D模型、图标等静态资源。
|
||||
|
||||
```sql
|
||||
CREATE TABLE resources (
|
||||
id SERIAL PRIMARY KEY,
|
||||
body_id VARCHAR(50) REFERENCES celestial_bodies(id) ON DELETE CASCADE,
|
||||
resource_type VARCHAR(50) NOT NULL, -- 资源类型
|
||||
file_path VARCHAR(500) NOT NULL, -- 相对于upload目录的路径
|
||||
file_size INTEGER, -- 文件大小(bytes)
|
||||
mime_type VARCHAR(100), -- MIME类型
|
||||
metadata JSONB, -- 扩展信息(分辨率、格式等)
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW(),
|
||||
|
||||
CONSTRAINT chk_resource_type CHECK (resource_type IN ('texture', 'model', 'icon', 'thumbnail', 'data'))
|
||||
);
|
||||
|
||||
-- 索引
|
||||
CREATE INDEX idx_resources_body_id ON resources(body_id);
|
||||
CREATE INDEX idx_resources_type ON resources(resource_type);
|
||||
|
||||
-- 注释
|
||||
COMMENT ON TABLE resources IS '资源文件管理表(纹理、模型、图标等)';
|
||||
COMMENT ON COLUMN resources.resource_type IS '资源类型:texture(纹理), model(3D模型), icon(图标), thumbnail(缩略图), data(数据文件)';
|
||||
COMMENT ON COLUMN resources.file_path IS '相对路径,例如:textures/planets/earth_2k.jpg';
|
||||
COMMENT ON COLUMN resources.metadata IS 'JSON格式元数据,例如:{"width": 2048, "height": 1024, "format": "jpg"}';
|
||||
```
|
||||
|
||||
**metadata JSONB字段示例**:
|
||||
```json
|
||||
{
|
||||
"width": 2048,
|
||||
"height": 1024,
|
||||
"format": "jpg",
|
||||
"color_space": "sRGB",
|
||||
"model_format": "glb",
|
||||
"polygon_count": 15000
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 4. static_data - 静态数据表
|
||||
|
||||
存储星座、星系、恒星等不需要动态计算的静态天文数据。
|
||||
|
||||
```sql
|
||||
CREATE TABLE static_data (
|
||||
id SERIAL PRIMARY KEY,
|
||||
category VARCHAR(50) NOT NULL, -- 数据分类
|
||||
name VARCHAR(200) NOT NULL, -- 名称
|
||||
name_zh VARCHAR(200), -- 中文名称
|
||||
data JSONB NOT NULL, -- 完整数据(JSON格式)
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW(),
|
||||
|
||||
CONSTRAINT chk_category CHECK (category IN ('constellation', 'galaxy', 'star', 'nebula', 'cluster')),
|
||||
CONSTRAINT uq_category_name UNIQUE (category, name)
|
||||
);
|
||||
|
||||
-- 索引
|
||||
CREATE INDEX idx_static_data_category ON static_data(category);
|
||||
CREATE INDEX idx_static_data_name ON static_data(name);
|
||||
CREATE INDEX idx_static_data_data ON static_data USING GIN(data); -- JSONB索引
|
||||
|
||||
-- 注释
|
||||
COMMENT ON TABLE static_data IS '静态天文数据表(星座、星系、恒星等)';
|
||||
COMMENT ON COLUMN static_data.category IS '数据分类:constellation(星座), galaxy(星系), star(恒星), nebula(星云), cluster(星团)';
|
||||
COMMENT ON COLUMN static_data.data IS 'JSON格式的完整数据,结构根据category不同而不同';
|
||||
```
|
||||
|
||||
**data JSONB字段示例**:
|
||||
|
||||
**星座数据**:
|
||||
```json
|
||||
{
|
||||
"stars": [
|
||||
{"name": "Betelgeuse", "ra": 88.79, "dec": 7.41},
|
||||
{"name": "Rigel", "ra": 78.63, "dec": -8.20}
|
||||
],
|
||||
"lines": [[0, 1], [1, 2]],
|
||||
"mythology": "猎户座的神话故事..."
|
||||
}
|
||||
```
|
||||
|
||||
**星系数据**:
|
||||
```json
|
||||
{
|
||||
"type": "spiral",
|
||||
"distance_mly": 2.537,
|
||||
"ra": 10.68,
|
||||
"dec": 41.27,
|
||||
"magnitude": 3.44,
|
||||
"diameter_kly": 220,
|
||||
"color": "#88aaff"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 5. nasa_cache - NASA API缓存表
|
||||
|
||||
持久化NASA Horizons API的响应结果,减少API调用。
|
||||
|
||||
```sql
|
||||
CREATE TABLE nasa_cache (
|
||||
cache_key VARCHAR(500) PRIMARY KEY, -- 缓存键(body_id:start:end:step)
|
||||
body_id VARCHAR(50),
|
||||
start_time TIMESTAMP, -- 查询起始时间
|
||||
end_time TIMESTAMP, -- 查询结束时间
|
||||
step VARCHAR(10), -- 时间步长(如'1d')
|
||||
data JSONB NOT NULL, -- 完整的API响应数据
|
||||
expires_at TIMESTAMP NOT NULL, -- 过期时间
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
|
||||
CONSTRAINT chk_time_range CHECK (end_time >= start_time)
|
||||
);
|
||||
|
||||
-- 索引
|
||||
CREATE INDEX idx_nasa_cache_body_id ON nasa_cache(body_id);
|
||||
CREATE INDEX idx_nasa_cache_expires ON nasa_cache(expires_at);
|
||||
CREATE INDEX idx_nasa_cache_time_range ON nasa_cache(body_id, start_time, end_time);
|
||||
|
||||
-- 自动清理过期缓存(可选,需要pg_cron扩展)
|
||||
-- SELECT cron.schedule('clean_expired_cache', '0 0 * * *', 'DELETE FROM nasa_cache WHERE expires_at < NOW()');
|
||||
|
||||
-- 注释
|
||||
COMMENT ON TABLE nasa_cache IS 'NASA Horizons API响应缓存表';
|
||||
COMMENT ON COLUMN nasa_cache.cache_key IS '缓存键格式:{body_id}:{start}:{end}:{step},例如:-31:2025-11-27:2025-11-28:1d';
|
||||
COMMENT ON COLUMN nasa_cache.data IS 'NASA API的完整JSON响应';
|
||||
COMMENT ON COLUMN nasa_cache.expires_at IS '缓存过期时间,过期后自动失效';
|
||||
```
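
按上面注释中的键格式,组装 cache_key 的一个小示意(函数名 `build_nasa_cache_key` 为示例命名,并非项目中已有接口):

```python
from datetime import datetime

def build_nasa_cache_key(body_id: str, start: datetime, end: datetime, step: str) -> str:
    # 键格式:{body_id}:{start}:{end}:{step}
    return f"{body_id}:{start:%Y-%m-%d}:{end:%Y-%m-%d}:{step}"

# 输出:-31:2025-11-27:2025-11-28:1d
print(build_nasa_cache_key("-31", datetime(2025, 11, 27), datetime(2025, 11, 28), "1d"))
```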
|
||||
|
||||
---
|
||||
|
||||
## 初始化脚本
|
||||
|
||||
### 创建数据库
|
||||
```sql
|
||||
-- 连接到PostgreSQL
|
||||
psql -U postgres
|
||||
|
||||
-- 创建数据库
|
||||
CREATE DATABASE cosmo_db
|
||||
WITH
|
||||
ENCODING = 'UTF8'
|
||||
LC_COLLATE = 'en_US.UTF-8'
|
||||
LC_CTYPE = 'en_US.UTF-8'
|
||||
TEMPLATE = template0;
|
||||
|
||||
-- 连接到新数据库
|
||||
\c cosmo_db
|
||||
|
||||
-- 创建必要的扩展(可选)
|
||||
CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; -- UUID生成
|
||||
CREATE EXTENSION IF NOT EXISTS "pg_trgm"; -- 模糊搜索
|
||||
```
|
||||
|
||||
### 完整建表脚本
|
||||
```sql
|
||||
-- 按依赖顺序创建表
|
||||
|
||||
-- 1. 天体基本信息表
|
||||
CREATE TABLE celestial_bodies (
|
||||
id VARCHAR(50) PRIMARY KEY,
|
||||
name VARCHAR(200) NOT NULL,
|
||||
name_zh VARCHAR(200),
|
||||
type VARCHAR(50) NOT NULL,
|
||||
description TEXT,
|
||||
metadata JSONB,
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW(),
|
||||
CONSTRAINT chk_type CHECK (type IN ('star', 'planet', 'moon', 'probe', 'comet', 'asteroid', 'dwarf_planet', 'satellite'))
|
||||
);
|
||||
CREATE INDEX idx_celestial_bodies_type ON celestial_bodies(type);
|
||||
CREATE INDEX idx_celestial_bodies_name ON celestial_bodies(name);
|
||||
|
||||
-- 2. 位置历史表
|
||||
CREATE TABLE positions (
|
||||
id BIGSERIAL PRIMARY KEY,
|
||||
body_id VARCHAR(50) NOT NULL REFERENCES celestial_bodies(id) ON DELETE CASCADE,
|
||||
time TIMESTAMP NOT NULL,
|
||||
x DOUBLE PRECISION NOT NULL,
|
||||
y DOUBLE PRECISION NOT NULL,
|
||||
z DOUBLE PRECISION NOT NULL,
|
||||
vx DOUBLE PRECISION,
|
||||
vy DOUBLE PRECISION,
|
||||
vz DOUBLE PRECISION,
|
||||
source VARCHAR(50) DEFAULT 'nasa_horizons',
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
CONSTRAINT chk_source CHECK (source IN ('nasa_horizons', 'calculated', 'user_defined', 'imported'))
|
||||
);
|
||||
CREATE INDEX idx_positions_body_time ON positions(body_id, time DESC);
|
||||
CREATE INDEX idx_positions_time ON positions(time);
|
||||
CREATE INDEX idx_positions_body_id ON positions(body_id);
|
||||
|
||||
-- 3. 资源管理表
|
||||
CREATE TABLE resources (
|
||||
id SERIAL PRIMARY KEY,
|
||||
body_id VARCHAR(50) REFERENCES celestial_bodies(id) ON DELETE CASCADE,
|
||||
resource_type VARCHAR(50) NOT NULL,
|
||||
file_path VARCHAR(500) NOT NULL,
|
||||
file_size INTEGER,
|
||||
mime_type VARCHAR(100),
|
||||
metadata JSONB,
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW(),
|
||||
CONSTRAINT chk_resource_type CHECK (resource_type IN ('texture', 'model', 'icon', 'thumbnail', 'data'))
|
||||
);
|
||||
CREATE INDEX idx_resources_body_id ON resources(body_id);
|
||||
CREATE INDEX idx_resources_type ON resources(resource_type);
|
||||
|
||||
-- 4. 静态数据表
|
||||
CREATE TABLE static_data (
|
||||
id SERIAL PRIMARY KEY,
|
||||
category VARCHAR(50) NOT NULL,
|
||||
name VARCHAR(200) NOT NULL,
|
||||
name_zh VARCHAR(200),
|
||||
data JSONB NOT NULL,
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW(),
|
||||
CONSTRAINT chk_category CHECK (category IN ('constellation', 'galaxy', 'star', 'nebula', 'cluster')),
|
||||
CONSTRAINT uq_category_name UNIQUE (category, name)
|
||||
);
|
||||
CREATE INDEX idx_static_data_category ON static_data(category);
|
||||
CREATE INDEX idx_static_data_name ON static_data(name);
|
||||
CREATE INDEX idx_static_data_data ON static_data USING GIN(data);
|
||||
|
||||
-- 5. NASA缓存表
|
||||
CREATE TABLE nasa_cache (
|
||||
cache_key VARCHAR(500) PRIMARY KEY,
|
||||
body_id VARCHAR(50),
|
||||
start_time TIMESTAMP,
|
||||
end_time TIMESTAMP,
|
||||
step VARCHAR(10),
|
||||
data JSONB NOT NULL,
|
||||
expires_at TIMESTAMP NOT NULL,
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
CONSTRAINT chk_time_range CHECK (end_time >= start_time)
|
||||
);
|
||||
CREATE INDEX idx_nasa_cache_body_id ON nasa_cache(body_id);
|
||||
CREATE INDEX idx_nasa_cache_expires ON nasa_cache(expires_at);
|
||||
CREATE INDEX idx_nasa_cache_time_range ON nasa_cache(body_id, start_time, end_time);
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 数据关系图
|
||||
|
||||
```
|
||||
celestial_bodies (天体)
|
||||
├── positions (1:N) - 天体位置历史
|
||||
├── resources (1:N) - 天体资源文件
|
||||
└── nasa_cache (1:N) - NASA API缓存
|
||||
|
||||
static_data (静态数据) - 独立表,不关联celestial_bodies
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 查询示例
|
||||
|
||||
### 查询某天体的最新位置
|
||||
```sql
|
||||
SELECT b.name, b.name_zh, p.x, p.y, p.z, p.time
|
||||
FROM celestial_bodies b
|
||||
LEFT JOIN LATERAL (
|
||||
SELECT * FROM positions
|
||||
WHERE body_id = b.id
|
||||
ORDER BY time DESC
|
||||
LIMIT 1
|
||||
) p ON true
|
||||
WHERE b.id = '-31';
|
||||
```
|
||||
|
||||
### 查询某天体在时间范围内的轨迹
|
||||
```sql
|
||||
SELECT time, x, y, z
|
||||
FROM positions
|
||||
WHERE body_id = '-31'
|
||||
AND time BETWEEN '2025-01-01' AND '2025-12-31'
|
||||
ORDER BY time;
|
||||
```
|
||||
|
||||
### 查询所有带纹理的行星
|
||||
```sql
|
||||
SELECT b.name, r.file_path
|
||||
FROM celestial_bodies b
|
||||
INNER JOIN resources r ON b.id = r.body_id
|
||||
WHERE b.type = 'planet' AND r.resource_type = 'texture';
|
||||
```
|
||||
|
||||
### 查询所有活跃的探测器
|
||||
```sql
|
||||
SELECT id, name, name_zh, metadata->>'status' as status
|
||||
FROM celestial_bodies
|
||||
WHERE type = 'probe'
|
||||
AND metadata->>'status' = 'active';
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 维护建议
|
||||
|
||||
1. **定期清理过期缓存**:
|
||||
```sql
|
||||
DELETE FROM nasa_cache WHERE expires_at < NOW();
|
||||
```
|
||||
|
||||
2. **分析表性能**:
|
||||
```sql
|
||||
ANALYZE celestial_bodies;
|
||||
ANALYZE positions;
|
||||
ANALYZE nasa_cache;
|
||||
```
|
||||
|
||||
3. **重建索引(如果性能下降)**:
|
||||
```sql
|
||||
REINDEX TABLE positions;
|
||||
```
|
||||
|
||||
4. **备份数据库**:
|
||||
```bash
|
||||
pg_dump -U postgres cosmo_db > backup_$(date +%Y%m%d).sql
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 扩展建议
|
||||
|
||||
### 未来可能需要的表
|
||||
|
||||
1. **users** - 用户表(如果需要用户系统)
|
||||
2. **user_favorites** - 用户收藏(收藏的天体)
|
||||
3. **observation_logs** - 观测日志(用户记录)
|
||||
4. **simulation_configs** - 模拟配置(用户自定义场景)
|
||||
|
||||
### 性能优化扩展
|
||||
|
||||
1. **TimescaleDB** - 时间序列优化
|
||||
```sql
|
||||
CREATE EXTENSION IF NOT EXISTS timescaledb;
|
||||
SELECT create_hypertable('positions', 'time');
|
||||
```
|
||||
|
||||
2. **PostGIS** - 空间数据扩展
|
||||
```sql
|
||||
CREATE EXTENSION IF NOT EXISTS postgis;
|
||||
ALTER TABLE positions ADD COLUMN geom geometry(POINTZ, 4326);
|
||||
```
|
||||
|
|
@@ -0,0 +1,205 @@
|
|||
"""
|
||||
Authentication API routes
|
||||
"""
|
||||
from datetime import datetime, timedelta
|
||||
from fastapi import APIRouter, HTTPException, Depends, status
|
||||
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from sqlalchemy import select, update
|
||||
from sqlalchemy.orm import selectinload
|
||||
from pydantic import BaseModel
|
||||
|
||||
from app.database import get_db
|
||||
from app.models.db import User, Role, Menu
|
||||
from app.services.auth import verify_password, create_access_token
|
||||
from app.services.auth_deps import get_current_user
|
||||
from app.services.token_service import token_service
|
||||
from app.config import settings
|
||||
|
||||
# HTTP Bearer security
|
||||
security = HTTPBearer()
|
||||
|
||||
|
||||
router = APIRouter(prefix="/auth", tags=["auth"])
|
||||
|
||||
|
||||
# Pydantic models
|
||||
class LoginRequest(BaseModel):
|
||||
username: str
|
||||
password: str
|
||||
|
||||
|
||||
class LoginResponse(BaseModel):
|
||||
access_token: str
|
||||
token_type: str = "bearer"
|
||||
user: dict
|
||||
|
||||
|
||||
class UserInfo(BaseModel):
|
||||
id: int
|
||||
username: str
|
||||
email: str | None
|
||||
full_name: str | None
|
||||
roles: list[str]
|
||||
|
||||
|
||||
class MenuNode(BaseModel):
|
||||
id: int
|
||||
name: str
|
||||
title: str
|
||||
icon: str | None
|
||||
path: str | None
|
||||
component: str | None
|
||||
children: list['MenuNode'] | None = None
|
||||
|
||||
|
||||
@router.post("/login", response_model=LoginResponse)
|
||||
async def login(
|
||||
login_data: LoginRequest,
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Login with username and password
|
||||
|
||||
Returns JWT access token
|
||||
"""
|
||||
# Query user with roles
|
||||
result = await db.execute(
|
||||
select(User)
|
||||
.options(selectinload(User.roles))
|
||||
.where(User.username == login_data.username)
|
||||
)
|
||||
user = result.scalar_one_or_none()
|
||||
|
||||
# Verify user exists and password is correct
|
||||
if not user or not verify_password(login_data.password, user.password_hash):
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||
detail="Incorrect username or password",
|
||||
headers={"WWW-Authenticate": "Bearer"},
|
||||
)
|
||||
|
||||
# Check if user is active
|
||||
if not user.is_active:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
detail="Inactive user"
|
||||
)
|
||||
|
||||
# Update last login time
|
||||
await db.execute(
|
||||
update(User)
|
||||
.where(User.id == user.id)
|
||||
.values(last_login_at=datetime.utcnow())
|
||||
)
|
||||
await db.commit()
|
||||
|
||||
# Create access token
|
||||
access_token = create_access_token(
|
||||
data={"sub": str(user.id), "username": user.username}
|
||||
)
|
||||
|
||||
# Save token to Redis
|
||||
await token_service.save_token(access_token, user.id, user.username)
|
||||
|
||||
# Return token and user info
|
||||
return LoginResponse(
|
||||
access_token=access_token,
|
||||
user={
|
||||
"id": user.id,
|
||||
"username": user.username,
|
||||
"email": user.email,
|
||||
"full_name": user.full_name,
|
||||
"roles": [role.name for role in user.roles]
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@router.get("/me", response_model=UserInfo)
|
||||
async def get_current_user_info(
|
||||
current_user: User = Depends(get_current_user)
|
||||
):
|
||||
"""
|
||||
Get current user information
|
||||
"""
|
||||
return UserInfo(
|
||||
id=current_user.id,
|
||||
username=current_user.username,
|
||||
email=current_user.email,
|
||||
full_name=current_user.full_name,
|
||||
roles=[role.name for role in current_user.roles]
|
||||
)
|
||||
|
||||
|
||||
@router.get("/menus", response_model=list[MenuNode])
|
||||
async def get_user_menus(
|
||||
current_user: User = Depends(get_current_user),
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Get menus accessible to current user based on their roles
|
||||
"""
|
||||
# Get all role IDs for current user
|
||||
role_ids = [role.id for role in current_user.roles]
|
||||
|
||||
if not role_ids:
|
||||
return []
|
||||
|
||||
# Query menus for user's roles
|
||||
from app.models.db.menu import RoleMenu
|
||||
result = await db.execute(
|
||||
select(Menu)
|
||||
.join(RoleMenu, RoleMenu.menu_id == Menu.id)
|
||||
.where(RoleMenu.role_id.in_(role_ids))
|
||||
.where(Menu.is_active == True)
|
||||
.order_by(Menu.sort_order)
|
||||
.distinct()
|
||||
)
|
||||
menus = result.scalars().all()
|
||||
|
||||
# Build tree structure
|
||||
menu_dict = {}
|
||||
root_menus = []
|
||||
|
||||
for menu in menus:
|
||||
menu_node = MenuNode(
|
||||
id=menu.id,
|
||||
name=menu.name,
|
||||
title=menu.title,
|
||||
icon=menu.icon,
|
||||
path=menu.path,
|
||||
component=menu.component,
|
||||
children=[]
|
||||
)
|
||||
menu_dict[menu.id] = menu_node
|
||||
|
||||
if menu.parent_id is None:
|
||||
root_menus.append(menu_node)
|
||||
|
||||
# Attach children to parents
|
||||
for menu in menus:
|
||||
if menu.parent_id and menu.parent_id in menu_dict:
|
||||
parent = menu_dict[menu.parent_id]
|
||||
if parent.children is None:
|
||||
parent.children = []
|
||||
parent.children.append(menu_dict[menu.id])
|
||||
|
||||
# Remove empty children lists
|
||||
for menu_node in menu_dict.values():
|
||||
if menu_node.children == []:
|
||||
menu_node.children = None
|
||||
|
||||
return root_menus
|
||||
|
||||
|
||||
@router.post("/logout")
|
||||
async def logout(
|
||||
credentials: HTTPAuthorizationCredentials = Depends(security)
|
||||
):
|
||||
"""
|
||||
Logout - revoke current token
|
||||
"""
|
||||
token = credentials.credentials
|
||||
await token_service.revoke_token(token)
|
||||
|
||||
return {"message": "Logged out successfully"}
|
||||
|
|
@@ -2,17 +2,29 @@
|
|||
API routes for celestial data
|
||||
"""
|
||||
from datetime import datetime
|
||||
from fastapi import APIRouter, HTTPException, Query
|
||||
from fastapi import APIRouter, HTTPException, Query, Depends, UploadFile, File
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from typing import Optional
|
||||
import logging
|
||||
|
||||
from app.models.celestial import (
|
||||
CelestialDataResponse,
|
||||
BodyInfo,
|
||||
CELESTIAL_BODIES,
|
||||
)
|
||||
from app.models.db import Resource
|
||||
from app.services.horizons import horizons_service
|
||||
from app.services.cache import cache_service
|
||||
from app.services.redis_cache import redis_cache, make_cache_key, get_ttl_seconds
|
||||
from app.services.cache_preheat import preheat_all_caches, preheat_current_positions, preheat_historical_positions
|
||||
from app.services.db_service import (
|
||||
celestial_body_service,
|
||||
position_service,
|
||||
nasa_cache_service,
|
||||
static_data_service,
|
||||
resource_service,
|
||||
)
|
||||
from app.services.orbit_service import orbit_service
|
||||
from app.database import get_db
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
|
@@ -33,74 +45,437 @@ async def get_celestial_positions(
|
|||
"1d",
|
||||
description="Time step (e.g., '1d' for 1 day, '12h' for 12 hours)",
|
||||
),
|
||||
body_ids: Optional[str] = Query(
|
||||
None,
|
||||
description="Comma-separated list of body IDs to fetch (e.g., '999,2000001')",
|
||||
),
|
||||
db: AsyncSession = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Get positions of all celestial bodies for a time range
|
||||
|
||||
If only start_time is provided, returns a single snapshot.
|
||||
If both start_time and end_time are provided, returns positions at intervals defined by step.
|
||||
Use body_ids to filter specific bodies (e.g., body_ids=999,2000001 for Pluto and Ceres).
|
||||
"""
|
||||
try:
|
||||
# Parse time strings
|
||||
start_dt = None if start_time is None else datetime.fromisoformat(start_time.replace("Z", "+00:00"))
|
||||
end_dt = None if end_time is None else datetime.fromisoformat(end_time.replace("Z", "+00:00"))
|
||||
|
||||
# Check cache first
|
||||
# Parse body_ids filter
|
||||
body_id_list = None
|
||||
if body_ids:
|
||||
body_id_list = [bid.strip() for bid in body_ids.split(',')]
|
||||
logger.info(f"Filtering for bodies: {body_id_list}")
|
||||
|
||||
# OPTIMIZATION: If no time specified, return most recent positions from database
|
||||
if start_dt is None and end_dt is None:
|
||||
logger.info("No time specified - fetching most recent positions from database")
|
||||
|
||||
# Check Redis cache first (persistent across restarts)
|
||||
start_str = "now"
|
||||
end_str = "now"
|
||||
redis_key = make_cache_key("positions", start_str, end_str, step)
|
||||
redis_cached = await redis_cache.get(redis_key)
|
||||
if redis_cached is not None:
|
||||
logger.info("Cache hit (Redis) for recent positions")
|
||||
return CelestialDataResponse(bodies=redis_cached)
|
||||
|
||||
# Check memory cache (faster but not persistent)
|
||||
cached_data = cache_service.get(start_dt, end_dt, step)
|
||||
if cached_data is not None:
|
||||
logger.info("Cache hit (Memory) for recent positions")
|
||||
return CelestialDataResponse(bodies=cached_data)
|
||||
|
||||
# Get all bodies from database
|
||||
all_bodies = await celestial_body_service.get_all_bodies(db)
|
||||
|
||||
# Filter bodies if body_ids specified
|
||||
if body_id_list:
|
||||
all_bodies = [b for b in all_bodies if b.id in body_id_list]
|
||||
|
||||
# For each body, get the most recent position
|
||||
bodies_data = []
|
||||
from datetime import timedelta
|
||||
now = datetime.utcnow()
|
||||
recent_window = now - timedelta(hours=24) # Look for positions in last 24 hours
|
||||
|
||||
for body in all_bodies:
|
||||
try:
|
||||
# Get most recent position for this body
|
||||
recent_positions = await position_service.get_positions(
|
||||
body_id=body.id,
|
||||
start_time=recent_window,
|
||||
end_time=now,
|
||||
session=db
|
||||
)
|
||||
|
||||
if recent_positions and len(recent_positions) > 0:
|
||||
# Use the most recent position
|
||||
latest_pos = recent_positions[-1]
|
||||
body_dict = {
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"name_zh": body.name_zh,
|
||||
"type": body.type,
|
||||
"description": body.description,
|
||||
"is_active": body.is_active, # Include probe active status
|
||||
"positions": [{
|
||||
"time": latest_pos.time.isoformat(),
|
||||
"x": latest_pos.x,
|
||||
"y": latest_pos.y,
|
||||
"z": latest_pos.z,
|
||||
}]
|
||||
}
|
||||
bodies_data.append(body_dict)
|
||||
else:
|
||||
# For inactive probes without recent positions, try to get last known position
|
||||
if body.type == 'probe' and body.is_active is False:
|
||||
# Get the most recent position ever recorded
|
||||
all_positions = await position_service.get_positions(
|
||||
body_id=body.id,
|
||||
start_time=None,
|
||||
end_time=None,
|
||||
session=db
|
||||
)
|
||||
|
||||
if all_positions and len(all_positions) > 0:
|
||||
# Use the last known position
|
||||
last_pos = all_positions[-1]
|
||||
body_dict = {
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"name_zh": body.name_zh,
|
||||
"type": body.type,
|
||||
"description": body.description,
|
||||
"is_active": False,
|
||||
"positions": [{
|
||||
"time": last_pos.time.isoformat(),
|
||||
"x": last_pos.x,
|
||||
"y": last_pos.y,
|
||||
"z": last_pos.z,
|
||||
}]
|
||||
}
|
||||
bodies_data.append(body_dict)
|
||||
else:
|
||||
# No position data at all, still include with empty positions
|
||||
body_dict = {
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"name_zh": body.name_zh,
|
||||
"type": body.type,
|
||||
"description": body.description,
|
||||
"is_active": False,
|
||||
"positions": []
|
||||
}
|
||||
bodies_data.append(body_dict)
|
||||
logger.info(f"Including inactive probe {body.name} with no position data")
|
||||
except Exception as e:
|
||||
logger.warning(f"Error processing {body.name}: {e}")
|
||||
# For inactive probes, still try to include them
|
||||
if body.type == 'probe' and body.is_active is False:
|
||||
body_dict = {
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"name_zh": body.name_zh,
|
||||
"type": body.type,
|
||||
"description": body.description,
|
||||
"is_active": False,
|
||||
"positions": []
|
||||
}
|
||||
bodies_data.append(body_dict)
|
||||
continue
|
||||
|
||||
# If we have recent data for all bodies, return it
|
||||
if len(bodies_data) == len(all_bodies):
|
||||
logger.info(f"✅ Returning recent positions from database ({len(bodies_data)} bodies) - FAST!")
|
||||
# Cache in memory
|
||||
cache_service.set(bodies_data, start_dt, end_dt, step)
|
||||
# Cache in Redis for persistence across restarts
|
||||
start_str = start_dt.isoformat() if start_dt else "now"
|
||||
end_str = end_dt.isoformat() if end_dt else "now"
|
||||
redis_key = make_cache_key("positions", start_str, end_str, step)
|
||||
await redis_cache.set(redis_key, bodies_data, get_ttl_seconds("current_positions"))
|
||||
return CelestialDataResponse(bodies=bodies_data)
|
||||
else:
|
||||
logger.info(f"Incomplete recent data ({len(bodies_data)}/{len(all_bodies)} bodies), falling back to Horizons")
|
||||
# Fall through to query Horizons below
|
||||
|
||||
# Check Redis cache first (persistent across restarts)
|
||||
start_str = start_dt.isoformat() if start_dt else "now"
|
||||
end_str = end_dt.isoformat() if end_dt else "now"
|
||||
redis_key = make_cache_key("positions", start_str, end_str, step)
|
||||
redis_cached = await redis_cache.get(redis_key)
|
||||
if redis_cached is not None:
|
||||
logger.info("Cache hit (Redis) for positions")
|
||||
return CelestialDataResponse(bodies=redis_cached)
|
||||
|
||||
# Check memory cache (faster but not persistent)
|
||||
cached_data = cache_service.get(start_dt, end_dt, step)
|
||||
if cached_data is not None:
|
||||
logger.info("Cache hit (Memory) for positions")
|
||||
return CelestialDataResponse(bodies=cached_data)
|
||||
|
||||
# Query Horizons
|
||||
# Check database cache (NASA API responses)
|
||||
# For each body, check if we have cached NASA response
|
||||
all_bodies = await celestial_body_service.get_all_bodies(db)
|
||||
|
||||
# Filter bodies if body_ids specified
|
||||
if body_id_list:
|
||||
all_bodies = [b for b in all_bodies if b.id in body_id_list]
|
||||
|
||||
use_db_cache = True
|
||||
db_cached_bodies = []
|
||||
|
||||
for body in all_bodies:
|
||||
cached_response = await nasa_cache_service.get_cached_response(
|
||||
body.id, start_dt, end_dt, step, db
|
||||
)
|
||||
if cached_response:
|
||||
db_cached_bodies.append({
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"type": body.type,
|
||||
"positions": cached_response.get("positions", [])
|
||||
})
|
||||
else:
|
||||
use_db_cache = False
|
||||
break
|
||||
|
||||
if use_db_cache and db_cached_bodies:
|
||||
logger.info("Cache hit (Database) for positions")
|
||||
# Cache in memory
|
||||
cache_service.set(db_cached_bodies, start_dt, end_dt, step)
|
||||
# Cache in Redis for faster access next time
|
||||
await redis_cache.set(redis_key, db_cached_bodies, get_ttl_seconds("historical_positions"))
|
||||
return CelestialDataResponse(bodies=db_cached_bodies)
|
||||
|
||||
# Check positions table for historical data (prefetched data)
|
||||
# This is faster than querying NASA Horizons for historical queries
|
||||
if start_dt and end_dt:
|
||||
logger.info(f"Checking positions table for historical data: {start_dt} to {end_dt}")
|
||||
all_bodies_positions = []
|
||||
has_complete_data = True
|
||||
|
||||
# Remove timezone info for database query (TIMESTAMP WITHOUT TIME ZONE)
|
||||
start_dt_naive = start_dt.replace(tzinfo=None)
|
||||
end_dt_naive = end_dt.replace(tzinfo=None)
|
||||
|
||||
for body in all_bodies:
|
||||
# Query positions table for this body in the time range
|
||||
positions = await position_service.get_positions(
|
||||
body_id=body.id,
|
||||
start_time=start_dt_naive,
|
||||
end_time=end_dt_naive,
|
||||
session=db
|
||||
)
|
||||
|
||||
if positions and len(positions) > 0:
|
||||
# Convert database positions to API format
|
||||
all_bodies_positions.append({
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"name_zh": body.name_zh,
|
||||
"type": body.type,
|
||||
"description": body.description,
|
||||
"is_active": body.is_active,
|
||||
"positions": [
|
||||
{
|
||||
"time": pos.time.isoformat(),
|
||||
"x": pos.x,
|
||||
"y": pos.y,
|
||||
"z": pos.z,
|
||||
}
|
||||
for pos in positions
|
||||
]
|
||||
})
|
||||
else:
|
||||
# For inactive probes, missing data is expected and acceptable
|
||||
if body.type == 'probe' and body.is_active is False:
|
||||
logger.debug(f"Skipping inactive probe {body.name} with no data for {start_dt_naive}")
|
||||
continue
|
||||
|
||||
# Missing data for active body - need to query Horizons
|
||||
has_complete_data = False
|
||||
break
|
||||
|
||||
if has_complete_data and all_bodies_positions:
|
||||
logger.info(f"Using prefetched historical data from positions table ({len(all_bodies_positions)} bodies)")
|
||||
# Cache in memory
|
||||
cache_service.set(all_bodies_positions, start_dt, end_dt, step)
|
||||
# Cache in Redis for faster access next time
|
||||
await redis_cache.set(redis_key, all_bodies_positions, get_ttl_seconds("historical_positions"))
|
||||
return CelestialDataResponse(bodies=all_bodies_positions)
|
||||
else:
|
||||
logger.info("Incomplete historical data in positions table, falling back to Horizons")
|
||||
|
||||
# Query Horizons (no cache available) - fetch from database + Horizons API
|
||||
logger.info(f"Fetching celestial data from Horizons: start={start_dt}, end={end_dt}, step={step}")
|
||||
bodies = horizons_service.get_all_bodies(start_dt, end_dt, step)
|
||||
|
||||
# Cache the result
|
||||
cache_service.set(bodies, start_dt, end_dt, step)
|
||||
# Get all bodies from database
|
||||
all_bodies = await celestial_body_service.get_all_bodies(db)
|
||||
|
||||
return CelestialDataResponse(bodies=bodies)
|
||||
# Filter bodies if body_ids specified
|
||||
if body_id_list:
|
||||
all_bodies = [b for b in all_bodies if b.id in body_id_list]
|
||||
|
||||
bodies_data = []
|
||||
for body in all_bodies:
|
||||
try:
|
||||
# Special handling for Sun (always at origin)
|
||||
if body.id == "10":
|
||||
sun_start = start_dt if start_dt else datetime.utcnow()
|
||||
sun_end = end_dt if end_dt else sun_start
|
||||
|
||||
positions_list = [{"time": sun_start.isoformat(), "x": 0.0, "y": 0.0, "z": 0.0}]
|
||||
if sun_start != sun_end:
|
||||
positions_list.append({"time": sun_end.isoformat(), "x": 0.0, "y": 0.0, "z": 0.0})
|
||||
|
||||
# Special handling for Cassini (mission ended 2017-09-15)
|
||||
elif body.id == "-82":
|
||||
cassini_date = datetime(2017, 9, 15, 11, 58, 0)
|
||||
pos_data = horizons_service.get_body_positions(body.id, cassini_date, cassini_date, step)
|
||||
positions_list = [
|
||||
{"time": p.time.isoformat(), "x": p.x, "y": p.y, "z": p.z}
|
||||
for p in pos_data
|
||||
]
|
||||
|
||||
else:
|
||||
# Query NASA Horizons for other bodies
|
||||
pos_data = horizons_service.get_body_positions(body.id, start_dt, end_dt, step)
|
||||
positions_list = [
|
||||
{"time": p.time.isoformat(), "x": p.x, "y": p.y, "z": p.z}
|
||||
for p in pos_data
|
||||
]
|
||||
|
||||
body_dict = {
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"name_zh": body.name_zh,
|
||||
"type": body.type,
|
||||
"description": body.description,
|
||||
"positions": positions_list
|
||||
}
|
||||
bodies_data.append(body_dict)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to get data for {body.name}: {str(e)}")
|
||||
# Continue with other bodies even if one fails
|
||||
continue
|
||||
|
||||
# Save to database cache and position records
|
||||
for body_dict in bodies_data:
|
||||
body_id = body_dict["id"]
|
||||
positions = body_dict.get("positions", [])
|
||||
|
||||
if positions:
|
||||
# Save NASA API response to cache
|
||||
await nasa_cache_service.save_response(
|
||||
body_id=body_id,
|
||||
start_time=start_dt,
|
||||
end_time=end_dt,
|
||||
step=step,
|
||||
response_data={"positions": positions},
|
||||
ttl_days=7,
|
||||
session=db
|
||||
)
|
||||
|
||||
# Save position data to positions table
|
||||
position_records = []
|
||||
for pos in positions:
|
||||
# Parse time and remove timezone for database storage
|
||||
pos_time = pos["time"]
|
||||
if isinstance(pos_time, str):
|
||||
pos_time = datetime.fromisoformat(pos["time"].replace("Z", "+00:00"))
|
||||
# Remove timezone info for TIMESTAMP WITHOUT TIME ZONE
|
||||
pos_time_naive = pos_time.replace(tzinfo=None) if hasattr(pos_time, 'replace') else pos_time
|
||||
|
||||
position_records.append({
|
||||
"time": pos_time_naive,
|
||||
"x": pos["x"],
|
||||
"y": pos["y"],
|
||||
"z": pos["z"],
|
||||
"vx": pos.get("vx"),
|
||||
"vy": pos.get("vy"),
|
||||
"vz": pos.get("vz"),
|
||||
})
|
||||
|
||||
if position_records:
|
||||
await position_service.save_positions(
|
||||
body_id=body_id,
|
||||
positions=position_records,
|
||||
source="nasa_horizons",
|
||||
session=db
|
||||
)
|
||||
logger.info(f"Saved {len(position_records)} positions for {body_id}")
|
||||
|
||||
# Cache in memory
|
||||
cache_service.set(bodies_data, start_dt, end_dt, step)
|
||||
# Cache in Redis for persistence across restarts
|
||||
start_str = start_dt.isoformat() if start_dt else "now"
|
||||
end_str = end_dt.isoformat() if end_dt else "now"
|
||||
redis_key = make_cache_key("positions", start_str, end_str, step)
|
||||
# Use longer TTL for historical data that was fetched from Horizons
|
||||
ttl = get_ttl_seconds("historical_positions") if start_dt and end_dt else get_ttl_seconds("current_positions")
|
||||
await redis_cache.set(redis_key, bodies_data, ttl)
|
||||
logger.info(f"Cached data in Redis with key: {redis_key} (TTL: {ttl}s)")
|
||||
|
||||
return CelestialDataResponse(bodies=bodies_data)
|
||||
|
||||
except ValueError as e:
|
||||
raise HTTPException(status_code=400, detail=f"Invalid time format: {str(e)}")
|
||||
except Exception as e:
|
||||
logger.error(f"Error fetching celestial positions: {str(e)}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
raise HTTPException(status_code=500, detail=f"Failed to fetch data: {str(e)}")
|
||||
|
||||
|
||||
@router.get("/info/{body_id}", response_model=BodyInfo)
|
||||
async def get_body_info(body_id: str):
|
||||
async def get_body_info(body_id: str, db: AsyncSession = Depends(get_db)):
|
||||
"""
|
||||
Get detailed information about a specific celestial body
|
||||
|
||||
Args:
|
||||
body_id: JPL Horizons ID (e.g., '-31' for Voyager 1, '399' for Earth)
|
||||
"""
|
||||
if body_id not in CELESTIAL_BODIES:
|
||||
body = await celestial_body_service.get_body_by_id(body_id, db)
|
||||
if not body:
|
||||
raise HTTPException(status_code=404, detail=f"Body {body_id} not found")
|
||||
|
||||
info = CELESTIAL_BODIES[body_id]
|
||||
# Extract extra_data fields
|
||||
extra_data = body.extra_data or {}
|
||||
|
||||
return BodyInfo(
|
||||
id=body_id,
|
||||
name=info["name"],
|
||||
type=info["type"],
|
||||
description=info["description"],
|
||||
launch_date=info.get("launch_date"),
|
||||
status=info.get("status"),
|
||||
id=body.id,
|
||||
name=body.name,
|
||||
type=body.type,
|
||||
description=body.description,
|
||||
launch_date=extra_data.get("launch_date"),
|
||||
status=extra_data.get("status"),
|
||||
)
|
||||
|
||||
|
||||
@router.get("/list")
|
||||
async def list_bodies():
|
||||
async def list_bodies(
|
||||
body_type: Optional[str] = Query(None, description="Filter by body type"),
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Get a list of all available celestial bodies
|
||||
"""
|
||||
bodies = await celestial_body_service.get_all_bodies(db, body_type)
|
||||
|
||||
bodies_list = []
|
||||
for body_id, info in CELESTIAL_BODIES.items():
|
||||
for body in bodies:
|
||||
bodies_list.append(
|
||||
{
|
||||
"id": body_id,
|
||||
"name": info["name"],
|
||||
"type": info["type"],
|
||||
"description": info["description"],
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"name_zh": body.name_zh,
|
||||
"type": body.type,
|
||||
"description": body.description,
|
||||
}
|
||||
)
|
||||
return {"bodies": bodies_list}
|
||||
|
|
@@ -113,3 +488,412 @@ async def clear_cache():
|
|||
"""
|
||||
cache_service.clear()
|
||||
return {"message": "Cache cleared successfully"}
|
||||
|
||||
|
||||
@router.post("/cache/preheat")
|
||||
async def preheat_cache(
|
||||
mode: str = Query("all", description="Preheat mode: 'all', 'current', 'historical'"),
|
||||
days: int = Query(3, description="Number of days for historical preheat", ge=1, le=30)
|
||||
):
|
||||
"""
|
||||
Manually trigger cache preheat (admin endpoint)
|
||||
|
||||
Args:
|
||||
mode: 'all' (both current and historical), 'current' (current positions only), 'historical' (historical only)
|
||||
days: Number of days to preheat for historical mode (default: 3, max: 30)
|
||||
"""
|
||||
try:
|
||||
if mode == "all":
|
||||
await preheat_all_caches()
|
||||
return {"message": f"Successfully preheated all caches (current + {days} days historical)"}
|
||||
elif mode == "current":
|
||||
await preheat_current_positions()
|
||||
return {"message": "Successfully preheated current positions"}
|
||||
elif mode == "historical":
|
||||
await preheat_historical_positions(days=days)
|
||||
return {"message": f"Successfully preheated {days} days of historical positions"}
|
||||
else:
|
||||
raise HTTPException(status_code=400, detail=f"Invalid mode: {mode}. Use 'all', 'current', or 'historical'")
|
||||
except Exception as e:
|
||||
logger.error(f"Cache preheat failed: {e}")
|
||||
raise HTTPException(status_code=500, detail=f"Preheat failed: {str(e)}")
|
||||
|
||||
|
||||
# === Static Data Endpoints ===
|
||||
|
||||
|
||||
@router.get("/static/categories")
|
||||
async def get_static_categories(db: AsyncSession = Depends(get_db)):
|
||||
"""
|
||||
Get all available static data categories
|
||||
"""
|
||||
categories = await static_data_service.get_all_categories(db)
|
||||
return {"categories": categories}
|
||||
|
||||
|
||||
@router.get("/static/{category}")
|
||||
async def get_static_data(
|
||||
category: str,
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Get all static data items for a specific category
|
||||
(e.g., 'star', 'constellation', 'galaxy')
|
||||
"""
|
||||
items = await static_data_service.get_by_category(category, db)
|
||||
|
||||
if not items:
|
||||
raise HTTPException(
|
||||
status_code=404,
|
||||
detail=f"No data found for category '{category}'"
|
||||
)
|
||||
|
||||
result = []
|
||||
for item in items:
|
||||
result.append({
|
||||
"id": item.id,
|
||||
"name": item.name,
|
||||
"name_zh": item.name_zh,
|
||||
"data": item.data
|
||||
})
|
||||
|
||||
return {"category": category, "items": result}
|
||||
|
||||
|
||||
# === Resource Management Endpoints ===
|
||||
|
||||
|
||||
@router.post("/resources/upload")
|
||||
async def upload_resource(
|
||||
body_id: Optional[str] = None,
|
||||
resource_type: str = Query(..., description="Type: texture, model, icon, thumbnail, data"),
|
||||
file: UploadFile = File(...),
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Upload a resource file (texture, model, icon, etc.)
|
||||
"""
|
||||
import os
|
||||
import aiofiles
|
||||
from pathlib import Path
|
||||
|
||||
# Validate resource type
|
||||
valid_types = ["texture", "model", "icon", "thumbnail", "data"]
|
||||
if resource_type not in valid_types:
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail=f"Invalid resource_type. Must be one of: {valid_types}"
|
||||
)
|
||||
|
||||
# Create upload directory structure
|
||||
upload_dir = Path("upload") / resource_type
|
||||
upload_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Generate unique filename
|
||||
import uuid
|
||||
file_ext = os.path.splitext(file.filename)[1]
|
||||
unique_filename = f"{uuid.uuid4()}{file_ext}"
|
||||
file_path = upload_dir / unique_filename
|
||||
|
||||
# Save file
|
||||
try:
|
||||
async with aiofiles.open(file_path, 'wb') as f:
|
||||
content = await file.read()
|
||||
await f.write(content)
|
||||
|
||||
# Get file size
|
||||
file_size = os.path.getsize(file_path)
|
||||
|
||||
# Create resource record
|
||||
resource = await resource_service.create_resource(
|
||||
{
|
||||
"body_id": body_id,
|
||||
"resource_type": resource_type,
|
||||
"file_path": str(file_path),
|
||||
"file_size": file_size,
|
||||
"mime_type": file.content_type,
|
||||
},
|
||||
db
|
||||
)
|
||||
|
||||
logger.info(f"Uploaded resource: {file_path} ({file_size} bytes)")
|
||||
|
||||
return {
|
||||
"id": resource.id,
|
||||
"resource_type": resource.resource_type,
|
||||
"file_path": resource.file_path,
|
||||
"file_size": resource.file_size,
|
||||
"message": "File uploaded successfully"
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
# Clean up file if database operation fails
|
||||
if file_path.exists():
|
||||
os.remove(file_path)
|
||||
logger.error(f"Error uploading file: {e}")
|
||||
raise HTTPException(status_code=500, detail=f"Upload failed: {str(e)}")
|
||||
|
||||
|
||||
@router.get("/resources/{body_id}")
|
||||
async def get_body_resources(
|
||||
body_id: str,
|
||||
resource_type: Optional[str] = Query(None, description="Filter by resource type"),
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Get all resources associated with a celestial body
|
||||
"""
|
||||
resources = await resource_service.get_resources_by_body(body_id, resource_type, db)
|
||||
|
||||
result = []
|
||||
for resource in resources:
|
||||
result.append({
|
||||
"id": resource.id,
|
||||
"resource_type": resource.resource_type,
|
||||
"file_path": resource.file_path,
|
||||
"file_size": resource.file_size,
|
||||
"mime_type": resource.mime_type,
|
||||
"created_at": resource.created_at.isoformat(),
|
||||
})
|
||||
|
||||
return {"body_id": body_id, "resources": result}
|
||||
|
||||
|
||||
@router.delete("/resources/{resource_id}")
|
||||
async def delete_resource(
|
||||
resource_id: int,
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Delete a resource file and its database record
|
||||
"""
|
||||
import os
|
||||
from sqlalchemy import select
|
||||
|
||||
# Get resource record
|
||||
result = await db.execute(
|
||||
select(Resource).where(Resource.id == resource_id)
|
||||
)
|
||||
resource = result.scalar_one_or_none()
|
||||
|
||||
if not resource:
|
||||
raise HTTPException(status_code=404, detail="Resource not found")
|
||||
|
||||
# Delete file if it exists
|
||||
file_path = resource.file_path
|
||||
if os.path.exists(file_path):
|
||||
try:
|
||||
os.remove(file_path)
|
||||
logger.info(f"Deleted file: {file_path}")
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to delete file {file_path}: {e}")
|
||||
|
||||
# Delete database record
|
||||
deleted = await resource_service.delete_resource(resource_id, db)
|
||||
|
||||
if deleted:
|
||||
return {"message": "Resource deleted successfully"}
|
||||
else:
|
||||
raise HTTPException(status_code=500, detail="Failed to delete resource")
|
||||
|
||||
|
||||
|
||||
# ============================================================
|
||||
# Orbit Management APIs
|
||||
# ============================================================
|
||||
|
||||
@router.get("/orbits")
|
||||
async def get_orbits(
|
||||
body_type: Optional[str] = Query(None, description="Filter by body type (planet, dwarf_planet)"),
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Get all precomputed orbital data
|
||||
|
||||
Query parameters:
|
||||
- body_type: Optional filter by celestial body type (planet, dwarf_planet)
|
||||
|
||||
Returns:
|
||||
- List of orbits with points, colors, and metadata
|
||||
"""
|
||||
logger.info(f"Fetching orbits (type filter: {body_type})")
|
||||
|
||||
try:
|
||||
orbits = await orbit_service.get_all_orbits(db, body_type=body_type)
|
||||
|
||||
result = []
|
||||
for orbit in orbits:
|
||||
# Get body info
|
||||
body = await celestial_body_service.get_body_by_id(orbit.body_id, db)
|
||||
|
||||
result.append({
|
||||
"body_id": orbit.body_id,
|
||||
"body_name": body.name if body else "Unknown",
|
||||
"body_name_zh": body.name_zh if body else None,
|
||||
"points": orbit.points,
|
||||
"num_points": orbit.num_points,
|
||||
"period_days": orbit.period_days,
|
||||
"color": orbit.color,
|
||||
"updated_at": orbit.updated_at.isoformat() if orbit.updated_at else None
|
||||
})
|
||||
|
||||
logger.info(f"✅ Returning {len(result)} orbits")
|
||||
return {"orbits": result}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to fetch orbits: {e}")
|
||||
raise HTTPException(status_code=500, detail=str(e))
|
||||
|
||||
|
||||
@router.post("/admin/orbits/generate")
|
||||
async def generate_orbits(
|
||||
body_ids: Optional[str] = Query(None, description="Comma-separated body IDs to generate. If empty, generates for all planets and dwarf planets"),
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Generate orbital data for celestial bodies
|
||||
|
||||
This endpoint queries NASA Horizons API to get complete orbital paths
|
||||
and stores them in the orbits table for fast frontend rendering.
|
||||
|
||||
Query parameters:
|
||||
- body_ids: Optional comma-separated list of body IDs (e.g., "399,999")
|
||||
If not provided, generates orbits for all planets and dwarf planets
|
||||
|
||||
Returns:
|
||||
- List of generated orbits with success/failure status
|
||||
"""
|
||||
logger.info("🌌 Starting orbit generation...")
|
||||
|
||||
# Orbital periods in days (from astronomical data)
|
||||
# Note: NASA Horizons data is limited to ~2199 for most bodies
|
||||
# We use single complete orbits that fit within this range
|
||||
ORBITAL_PERIODS = {
|
||||
# Planets - single complete orbit
|
||||
"199": 88.0, # Mercury
|
||||
"299": 224.7, # Venus
|
||||
"399": 365.25, # Earth
|
||||
"499": 687.0, # Mars
|
||||
"599": 4333.0, # Jupiter (11.86 years)
|
||||
"699": 10759.0, # Saturn (29.46 years)
|
||||
"799": 30687.0, # Uranus (84.01 years)
|
||||
"899": 60190.0, # Neptune (164.79 years)
|
||||
# Dwarf Planets - single complete orbit
|
||||
"999": 90560.0, # Pluto (247.94 years - full orbit)
|
||||
"2000001": 1680.0, # Ceres (4.6 years)
|
||||
"136199": 203500.0, # Eris (557 years - full orbit)
|
||||
"136108": 104000.0, # Haumea (285 years - full orbit)
|
||||
"136472": 112897.0, # Makemake (309 years - full orbit)
|
||||
}
|
||||
|
||||
# Default colors for orbits
|
||||
DEFAULT_COLORS = {
|
||||
"199": "#8C7853", # Mercury - brownish
|
||||
"299": "#FFC649", # Venus - yellowish
|
||||
"399": "#4A90E2", # Earth - blue
|
||||
"499": "#CD5C5C", # Mars - red
|
||||
"599": "#DAA520", # Jupiter - golden
|
||||
"699": "#F4A460", # Saturn - sandy brown
|
||||
"799": "#4FD1C5", # Uranus - cyan
|
||||
"899": "#4169E1", # Neptune - royal blue
|
||||
"999": "#8B7355", # Pluto - brown
|
||||
"2000001": "#9E9E9E", # Ceres - gray
|
||||
"136199": "#E0E0E0", # Eris - light gray
|
||||
"136108": "#D4A574", # Haumea - tan
|
||||
"136472": "#C49A6C", # Makemake - beige
|
||||
}
|
||||
|
||||
try:
|
||||
# Determine which bodies to generate orbits for
|
||||
if body_ids:
|
||||
# Parse comma-separated list
|
||||
target_body_ids = [bid.strip() for bid in body_ids.split(",")]
|
||||
bodies_to_process = []
|
||||
|
||||
for bid in target_body_ids:
|
||||
body = await celestial_body_service.get_body_by_id(bid, db)
|
||||
if body:
|
||||
bodies_to_process.append(body)
|
||||
else:
|
||||
logger.warning(f"Body {bid} not found in database")
|
||||
else:
|
||||
# Get all planets and dwarf planets
|
||||
all_bodies = await celestial_body_service.get_all_bodies(db)
|
||||
bodies_to_process = [
|
||||
b for b in all_bodies
|
||||
if b.type in ["planet", "dwarf_planet"] and b.id in ORBITAL_PERIODS
|
||||
]
|
||||
|
||||
if not bodies_to_process:
|
||||
raise HTTPException(status_code=400, detail="No valid bodies to process")
|
||||
|
||||
logger.info(f"📋 Generating orbits for {len(bodies_to_process)} bodies")
|
||||
|
||||
results = []
|
||||
success_count = 0
|
||||
failure_count = 0
|
||||
|
||||
for body in bodies_to_process:
|
||||
try:
|
||||
period = ORBITAL_PERIODS.get(body.id)
|
||||
if not period:
|
||||
logger.warning(f"No orbital period defined for {body.name}, skipping")
|
||||
continue
|
||||
|
||||
color = DEFAULT_COLORS.get(body.id, "#CCCCCC")
|
||||
|
||||
# Generate orbit
|
||||
orbit = await orbit_service.generate_orbit(
|
||||
body_id=body.id,
|
||||
body_name=body.name_zh or body.name,
|
||||
period_days=period,
|
||||
color=color,
|
||||
session=db,
|
||||
horizons_service=horizons_service
|
||||
)
|
||||
|
||||
results.append({
|
||||
"body_id": body.id,
|
||||
"body_name": body.name_zh or body.name,
|
||||
"status": "success",
|
||||
"num_points": orbit.num_points,
|
||||
"period_days": orbit.period_days
|
||||
})
|
||||
success_count += 1
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to generate orbit for {body.name}: {e}")
|
||||
results.append({
|
||||
"body_id": body.id,
|
||||
"body_name": body.name_zh or body.name,
|
||||
"status": "failed",
|
||||
"error": str(e)
|
||||
})
|
||||
failure_count += 1
|
||||
|
||||
logger.info(f"🎉 Orbit generation complete: {success_count} succeeded, {failure_count} failed")
|
||||
|
||||
return {
|
||||
"message": f"Generated {success_count} orbits ({failure_count} failed)",
|
||||
"results": results
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Orbit generation failed: {e}")
|
||||
raise HTTPException(status_code=500, detail=str(e))
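
A quick way to exercise the generation endpoint above, as a sketch only — it assumes the API is running locally on port 8000 with the default `/api` prefix and that `httpx` is available, none of which is guaranteed by this diff:

```python
import httpx

# Generate orbits for Earth (399) and Pluto (999) only; omit body_ids to
# process every planet and dwarf planet defined in ORBITAL_PERIODS.
resp = httpx.post(
    "http://localhost:8000/api/admin/orbits/generate",
    params={"body_ids": "399,999"},
    timeout=300,  # Horizons queries can be slow
)
resp.raise_for_status()
for item in resp.json()["results"]:
    print(item["body_name"], item["status"])
```
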
|
||||
|
||||
|
||||
@router.delete("/admin/orbits/{body_id}")
|
||||
async def delete_orbit(
|
||||
body_id: str,
|
||||
db: AsyncSession = Depends(get_db)
|
||||
):
|
||||
"""Delete orbit data for a specific body"""
|
||||
logger.info(f"Deleting orbit for body {body_id}")
|
||||
|
||||
deleted = await orbit_service.delete_orbit(body_id, db)
|
||||
|
||||
if deleted:
|
||||
return {"message": f"Orbit for {body_id} deleted successfully"}
|
||||
else:
|
||||
raise HTTPException(status_code=404, detail="Orbit not found")
|
||||
|
|
|
|||
|
|
@ -2,11 +2,13 @@
Application configuration
"""
from pydantic_settings import BaseSettings
from pydantic import Field


class Settings(BaseSettings):
    """Application settings"""

    # Application
    app_name: str = "Cosmo - Deep Space Explorer"
    api_prefix: str = "/api"

@ -16,6 +18,46 @@ class Settings(BaseSettings):
    # Cache settings
    cache_ttl_days: int = 3

    # JWT settings
    jwt_secret_key: str = "your-secret-key-change-this-in-production"
    jwt_algorithm: str = "HS256"
    jwt_access_token_expire_minutes: int = 60 * 24  # 24 hours

    # Database settings (PostgreSQL)
    database_host: str = "localhost"
    database_port: int = 5432
    database_name: str = "cosmo_db"
    database_user: str = "postgres"
    database_password: str = "postgres"
    database_pool_size: int = 20
    database_max_overflow: int = 10

    # Redis settings
    redis_host: str = "localhost"
    redis_port: int = 6379
    redis_db: int = 0
    redis_password: str = ""
    redis_max_connections: int = 50

    # File upload settings
    upload_dir: str = "upload"
    max_upload_size: int = 10485760  # 10MB

    @property
    def database_url(self) -> str:
        """Construct database URL for SQLAlchemy"""
        return (
            f"postgresql+asyncpg://{self.database_user}:{self.database_password}"
            f"@{self.database_host}:{self.database_port}/{self.database_name}"
        )

    @property
    def redis_url(self) -> str:
        """Construct Redis URL"""
        if self.redis_password:
            return f"redis://:{self.redis_password}@{self.redis_host}:{self.redis_port}/{self.redis_db}"
        return f"redis://{self.redis_host}:{self.redis_port}/{self.redis_db}"

    class Config:
        env_file = ".env"
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,73 @@
|
|||
"""
|
||||
Database connection and session management
|
||||
"""
|
||||
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, async_sessionmaker
|
||||
from sqlalchemy.orm import declarative_base
|
||||
from app.config import settings
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Create async engine
|
||||
engine = create_async_engine(
|
||||
settings.database_url,
|
||||
echo=False, # Set to True for SQL query logging
|
||||
pool_size=settings.database_pool_size,
|
||||
max_overflow=settings.database_max_overflow,
|
||||
pool_pre_ping=True, # Verify connections before using
|
||||
)
|
||||
|
||||
# Create async session factory
|
||||
AsyncSessionLocal = async_sessionmaker(
|
||||
engine,
|
||||
class_=AsyncSession,
|
||||
expire_on_commit=False,
|
||||
autocommit=False,
|
||||
autoflush=False,
|
||||
)
|
||||
|
||||
# Base class for ORM models
|
||||
Base = declarative_base()
|
||||
|
||||
|
||||
async def get_db() -> AsyncSession:
|
||||
"""
|
||||
Dependency function for FastAPI to get database sessions
|
||||
|
||||
Usage:
|
||||
@app.get("/items")
|
||||
async def read_items(db: AsyncSession = Depends(get_db)):
|
||||
...
|
||||
"""
|
||||
async with AsyncSessionLocal() as session:
|
||||
try:
|
||||
yield session
|
||||
await session.commit()
|
||||
except Exception:
|
||||
await session.rollback()
|
||||
raise
|
||||
finally:
|
||||
await session.close()
|
||||
|
||||
|
||||
async def init_db():
|
||||
"""Initialize database - create all tables"""
|
||||
from app.models.db import (
|
||||
CelestialBody,
|
||||
Position,
|
||||
Resource,
|
||||
StaticData,
|
||||
NasaCache,
|
||||
)
|
||||
|
||||
async with engine.begin() as conn:
|
||||
# Create all tables
|
||||
await conn.run_sync(Base.metadata.create_all)
|
||||
|
||||
logger.info("Database tables created successfully")
|
||||
|
||||
|
||||
async def close_db():
|
||||
"""Close database connections"""
|
||||
await engine.dispose()
|
||||
logger.info("Database connections closed")
|
||||
|
|
@ -2,12 +2,26 @@
|
|||
Cosmo - Deep Space Explorer Backend API
|
||||
FastAPI application entry point
|
||||
"""
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
# Add backend directory to Python path for direct execution
|
||||
backend_dir = Path(__file__).resolve().parent.parent
|
||||
if str(backend_dir) not in sys.path:
|
||||
sys.path.insert(0, str(backend_dir))
|
||||
|
||||
import logging
|
||||
from contextlib import asynccontextmanager
|
||||
from fastapi import FastAPI
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from fastapi.staticfiles import StaticFiles
|
||||
|
||||
from app.config import settings
|
||||
from app.api.routes import router as celestial_router
|
||||
from app.api.auth import router as auth_router
|
||||
from app.services.redis_cache import redis_cache
|
||||
from app.services.cache_preheat import preheat_all_caches
|
||||
from app.database import close_db
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(
|
||||
|
|
@ -17,11 +31,46 @@ logging.basicConfig(
|
|||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@asynccontextmanager
|
||||
async def lifespan(app: FastAPI):
|
||||
"""Application lifespan manager - startup and shutdown events"""
|
||||
# Startup
|
||||
logger.info("=" * 60)
|
||||
logger.info("Starting Cosmo Backend API...")
|
||||
logger.info("=" * 60)
|
||||
|
||||
# Connect to Redis
|
||||
await redis_cache.connect()
|
||||
|
||||
# Preheat caches (load from database to Redis)
|
||||
await preheat_all_caches()
|
||||
|
||||
logger.info("✓ Application started successfully")
|
||||
logger.info("=" * 60)
|
||||
|
||||
yield
|
||||
|
||||
# Shutdown
|
||||
logger.info("=" * 60)
|
||||
logger.info("Shutting down Cosmo Backend API...")
|
||||
|
||||
# Disconnect Redis
|
||||
await redis_cache.disconnect()
|
||||
|
||||
# Close database connections
|
||||
await close_db()
|
||||
|
||||
logger.info("✓ Application shutdown complete")
|
||||
logger.info("=" * 60)
|
||||
|
||||
|
||||
# Create FastAPI app
|
||||
app = FastAPI(
|
||||
title=settings.app_name,
|
||||
description="Backend API for deep space probe visualization using NASA JPL Horizons data",
|
||||
version="1.0.0",
|
||||
lifespan=lifespan,
|
||||
)
|
||||
|
||||
# Configure CORS
|
||||
|
|
@ -35,6 +84,13 @@ app.add_middleware(
|
|||
|
||||
# Include routers
|
||||
app.include_router(celestial_router, prefix=settings.api_prefix)
|
||||
app.include_router(auth_router, prefix=settings.api_prefix)
|
||||
|
||||
# Mount static files for uploaded resources
|
||||
upload_dir = Path(__file__).parent.parent / "upload"
|
||||
upload_dir.mkdir(exist_ok=True)
|
||||
app.mount("/upload", StaticFiles(directory=str(upload_dir)), name="upload")
|
||||
logger.info(f"Static files mounted at /upload -> {upload_dir}")
|
||||
|
||||
|
||||
@app.get("/")
|
||||
|
|
@ -50,8 +106,16 @@ async def root():
 
 @app.get("/health")
 async def health():
-    """Health check endpoint"""
-    return {"status": "healthy"}
+    """Health check endpoint with service status"""
+    from app.services.redis_cache import redis_cache
+
+    redis_stats = await redis_cache.get_stats()
+
+    return {
+        "status": "healthy",
+        "redis": redis_stats,
+        "database": "connected",  # If we got here, database is working
+    }
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
|
|
|||
|
|
@ -21,11 +21,12 @@ class CelestialBody(BaseModel):
     id: str = Field(..., description="JPL Horizons ID")
     name: str = Field(..., description="Display name")
     name_zh: str | None = Field(None, description="Chinese name")
-    type: Literal["planet", "probe", "star"] = Field(..., description="Body type")
+    type: Literal["planet", "probe", "star", "dwarf_planet", "satellite"] = Field(..., description="Body type")
     positions: list[Position] = Field(
         default_factory=list, description="Position history"
     )
     description: str | None = Field(None, description="Description")
+    is_active: bool | None = Field(None, description="Active status (for probes: True=active, False=inactive)")
 
 
 class CelestialDataResponse(BaseModel):
@ -42,7 +43,7 @@ class BodyInfo(BaseModel):
 
     id: str
     name: str
-    type: Literal["planet", "probe", "star"]
+    type: Literal["planet", "probe", "star", "dwarf_planet", "satellite"]
     description: str
     launch_date: str | None = None
     status: str | None = None
|
||||
|
|
@ -168,4 +169,35 @@ CELESTIAL_BODIES = {
|
|||
"type": "planet",
|
||||
"description": "海王星,太阳系最外层的行星",
|
||||
},
|
||||
"999": {
|
||||
"name": "Pluto",
|
||||
"name_zh": "冥王星",
|
||||
"type": "dwarf_planet",
|
||||
"description": "冥王星,曾经的第九大行星,现为矮行星",
|
||||
},
|
||||
# Dwarf Planets
|
||||
"2000001": {
|
||||
"name": "Ceres",
|
||||
"name_zh": "谷神星",
|
||||
"type": "dwarf_planet",
|
||||
"description": "谷神星,小行星带中最大的天体,也是唯一的矮行星",
|
||||
},
|
||||
"136199": {
|
||||
"name": "Eris",
|
||||
"name_zh": "阋神星",
|
||||
"type": "dwarf_planet",
|
||||
"description": "阋神星,曾被认为是第十大行星,导致冥王星被降级为矮行星",
|
||||
},
|
||||
"136108": {
|
||||
"name": "Haumea",
|
||||
"name_zh": "妊神星",
|
||||
"type": "dwarf_planet",
|
||||
"description": "妊神星,形状像橄榄球的矮行星,拥有两颗卫星和光环",
|
||||
},
|
||||
"136472": {
|
||||
"name": "Makemake",
|
||||
"name_zh": "鸟神星",
|
||||
"type": "dwarf_planet",
|
||||
"description": "鸟神星,柯伊伯带中第二亮的天体",
|
||||
},
|
||||
}
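
With the widened `Literal`, dwarf planets validate like any other body. A small sketch (the field values are illustrative, taken from the entries above):

```python
from app.models.celestial import CelestialBody

pluto = CelestialBody(
    id="999",
    name="Pluto",
    name_zh="冥王星",
    type="dwarf_planet",   # rejected before this change, accepted now
    description="矮行星",
)
print(pluto.type)  # dwarf_planet
```
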
|
||||
|
|
|
|||
|
|
@ -0,0 +1,26 @@
"""
Database ORM models
"""
from .celestial_body import CelestialBody
from .position import Position
from .resource import Resource
from .static_data import StaticData
from .nasa_cache import NasaCache
from .orbit import Orbit
from .user import User, user_roles
from .role import Role
from .menu import Menu, RoleMenu

__all__ = [
    "CelestialBody",
    "Position",
    "Resource",
    "StaticData",
    "NasaCache",
    "Orbit",
    "User",
    "Role",
    "Menu",
    "RoleMenu",
    "user_roles",
]
|
||||
|
|
@ -0,0 +1,45 @@
|
|||
"""
|
||||
CelestialBody ORM model
|
||||
"""
|
||||
from sqlalchemy import Column, String, Text, TIMESTAMP, Boolean, CheckConstraint, Index
|
||||
from sqlalchemy.dialects.postgresql import JSONB
|
||||
from sqlalchemy.sql import func
|
||||
from sqlalchemy.orm import relationship
|
||||
from app.database import Base
|
||||
|
||||
|
||||
class CelestialBody(Base):
|
||||
"""Celestial body (star, planet, probe, etc.)"""
|
||||
|
||||
__tablename__ = "celestial_bodies"
|
||||
|
||||
id = Column(String(50), primary_key=True, comment="JPL Horizons ID or custom ID")
|
||||
name = Column(String(200), nullable=False, comment="English name")
|
||||
name_zh = Column(String(200), nullable=True, comment="Chinese name")
|
||||
type = Column(String(50), nullable=False, comment="Body type")
|
||||
description = Column(Text, nullable=True, comment="Description")
|
||||
is_active = Column(Boolean, nullable=True, comment="Active status for probes (True=active, False=inactive)")
|
||||
extra_data = Column(JSONB, nullable=True, comment="Extended metadata (JSON)")
|
||||
created_at = Column(TIMESTAMP, server_default=func.now())
|
||||
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
|
||||
|
||||
# Relationships
|
||||
positions = relationship(
|
||||
"Position", back_populates="body", cascade="all, delete-orphan"
|
||||
)
|
||||
resources = relationship(
|
||||
"Resource", back_populates="body", cascade="all, delete-orphan"
|
||||
)
|
||||
|
||||
# Constraints
|
||||
__table_args__ = (
|
||||
CheckConstraint(
|
||||
"type IN ('star', 'planet', 'moon', 'probe', 'comet', 'asteroid', 'dwarf_planet', 'satellite')",
|
||||
name="chk_type",
|
||||
),
|
||||
Index("idx_celestial_bodies_type", "type"),
|
||||
Index("idx_celestial_bodies_name", "name"),
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return f"<CelestialBody(id='{self.id}', name='{self.name}', type='{self.type}')>"
|
||||
|
|
@ -0,0 +1,64 @@
|
|||
"""
|
||||
Menu ORM model
|
||||
"""
|
||||
from sqlalchemy import Column, String, Integer, Boolean, Text, TIMESTAMP, ForeignKey, Index
|
||||
from sqlalchemy.sql import func
|
||||
from sqlalchemy.orm import relationship
|
||||
from app.database import Base
|
||||
|
||||
|
||||
class Menu(Base):
|
||||
"""Backend menu items"""
|
||||
|
||||
__tablename__ = "menus"
|
||||
|
||||
id = Column(Integer, primary_key=True, autoincrement=True)
|
||||
parent_id = Column(Integer, ForeignKey('menus.id', ondelete='CASCADE'), nullable=True, comment="Parent menu ID (NULL for root)")
|
||||
name = Column(String(100), nullable=False, comment="Menu name")
|
||||
title = Column(String(100), nullable=False, comment="Display title")
|
||||
icon = Column(String(100), nullable=True, comment="Icon name (e.g., 'settings', 'database')")
|
||||
path = Column(String(255), nullable=True, comment="Route path (e.g., '/admin/celestial-bodies')")
|
||||
component = Column(String(255), nullable=True, comment="Component path (e.g., 'admin/CelestialBodies')")
|
||||
sort_order = Column(Integer, default=0, nullable=False, comment="Display order (ascending)")
|
||||
is_active = Column(Boolean, default=True, nullable=False, comment="Menu active status")
|
||||
description = Column(Text, nullable=True, comment="Menu description")
|
||||
created_at = Column(TIMESTAMP, server_default=func.now())
|
||||
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
|
||||
|
||||
# Relationships
|
||||
children = relationship("Menu", back_populates="parent", cascade="all, delete-orphan")
|
||||
parent = relationship("Menu", back_populates="children", remote_side=[id])
|
||||
role_menus = relationship("RoleMenu", back_populates="menu", cascade="all, delete-orphan")
|
||||
|
||||
# Indexes
|
||||
__table_args__ = (
|
||||
Index("idx_menus_parent_id", "parent_id"),
|
||||
Index("idx_menus_sort_order", "sort_order"),
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return f"<Menu(id={self.id}, name='{self.name}', path='{self.path}')>"
|
||||
|
||||
|
||||
class RoleMenu(Base):
|
||||
"""Role-Menu relationship (which menus each role can access)"""
|
||||
|
||||
__tablename__ = "role_menus"
|
||||
|
||||
id = Column(Integer, primary_key=True, autoincrement=True)
|
||||
role_id = Column(Integer, ForeignKey('roles.id', ondelete='CASCADE'), nullable=False)
|
||||
menu_id = Column(Integer, ForeignKey('menus.id', ondelete='CASCADE'), nullable=False)
|
||||
created_at = Column(TIMESTAMP, server_default=func.now())
|
||||
|
||||
# Relationships
|
||||
role = relationship("Role", back_populates="menus")
|
||||
menu = relationship("Menu", back_populates="role_menus")
|
||||
|
||||
# Constraints
|
||||
__table_args__ = (
|
||||
Index("idx_role_menus_role_id", "role_id"),
|
||||
Index("idx_role_menus_menu_id", "menu_id"),
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return f"<RoleMenu(role_id={self.role_id}, menu_id={self.menu_id})>"
|
||||
|
|
@ -0,0 +1,42 @@
|
|||
"""
|
||||
NasaCache ORM model - NASA Horizons API cache
|
||||
"""
|
||||
from sqlalchemy import Column, String, TIMESTAMP, CheckConstraint, Index
|
||||
from sqlalchemy.dialects.postgresql import JSONB
|
||||
from sqlalchemy.sql import func
|
||||
from app.database import Base
|
||||
|
||||
|
||||
class NasaCache(Base):
|
||||
"""NASA Horizons API response cache"""
|
||||
|
||||
__tablename__ = "nasa_cache"
|
||||
|
||||
cache_key = Column(
|
||||
String(500),
|
||||
primary_key=True,
|
||||
comment="Cache key: {body_id}:{start}:{end}:{step}",
|
||||
)
|
||||
body_id = Column(String(50), nullable=True, comment="Body ID")
|
||||
start_time = Column(TIMESTAMP, nullable=True, comment="Query start time")
|
||||
end_time = Column(TIMESTAMP, nullable=True, comment="Query end time")
|
||||
step = Column(String(10), nullable=True, comment="Time step (e.g., '1d')")
|
||||
data = Column(JSONB, nullable=False, comment="Complete API response (JSON)")
|
||||
expires_at = Column(
|
||||
TIMESTAMP, nullable=False, comment="Cache expiration time"
|
||||
)
|
||||
created_at = Column(TIMESTAMP, server_default=func.now())
|
||||
|
||||
# Constraints and indexes
|
||||
__table_args__ = (
|
||||
CheckConstraint(
|
||||
"end_time >= start_time",
|
||||
name="chk_time_range",
|
||||
),
|
||||
Index("idx_nasa_cache_body_id", "body_id"),
|
||||
Index("idx_nasa_cache_expires", "expires_at"),
|
||||
Index("idx_nasa_cache_time_range", "body_id", "start_time", "end_time"),
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return f"<NasaCache(cache_key='{self.cache_key}', body_id='{self.body_id}', expires_at='{self.expires_at}')>"
|
||||
|
|
@ -0,0 +1,27 @@
"""
Database model for orbits table
"""
from datetime import datetime
from sqlalchemy import Column, Integer, String, Float, Text, DateTime, ForeignKey, Index
from sqlalchemy.dialects.postgresql import JSONB
from app.database import Base


class Orbit(Base):
    """Orbital path data for celestial bodies"""

    __tablename__ = "orbits"

    id = Column(Integer, primary_key=True, index=True)
    body_id = Column(Text, ForeignKey("celestial_bodies.id", ondelete="CASCADE"), nullable=False, unique=True)
    points = Column(JSONB, nullable=False)  # Array of {x, y, z} points
    num_points = Column(Integer, nullable=False)
    period_days = Column(Float, nullable=True)
    color = Column(String(20), nullable=True)
    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)

    __table_args__ = (
        Index('idx_orbits_body_id', 'body_id'),
        Index('idx_orbits_updated_at', 'updated_at'),
    )
|
||||
|
|
@ -0,0 +1,52 @@
|
|||
"""
|
||||
Position ORM model - Time series data
|
||||
"""
|
||||
from sqlalchemy import Column, String, TIMESTAMP, BigInteger, Float, ForeignKey, CheckConstraint, Index
|
||||
from sqlalchemy.sql import func
|
||||
from sqlalchemy.orm import relationship
|
||||
from app.database import Base
|
||||
|
||||
|
||||
class Position(Base):
|
||||
"""Celestial body position history"""
|
||||
|
||||
__tablename__ = "positions"
|
||||
|
||||
id = Column(BigInteger, primary_key=True, autoincrement=True)
|
||||
body_id = Column(
|
||||
String(50),
|
||||
ForeignKey("celestial_bodies.id", ondelete="CASCADE"),
|
||||
nullable=False,
|
||||
comment="Reference to celestial_bodies.id",
|
||||
)
|
||||
time = Column(TIMESTAMP, nullable=False, comment="Position timestamp (UTC)")
|
||||
x = Column(Float, nullable=False, comment="X coordinate (AU)")
|
||||
y = Column(Float, nullable=False, comment="Y coordinate (AU)")
|
||||
z = Column(Float, nullable=False, comment="Z coordinate (AU)")
|
||||
vx = Column(Float, nullable=True, comment="X velocity (optional)")
|
||||
vy = Column(Float, nullable=True, comment="Y velocity (optional)")
|
||||
vz = Column(Float, nullable=True, comment="Z velocity (optional)")
|
||||
source = Column(
|
||||
String(50),
|
||||
nullable=False,
|
||||
default="nasa_horizons",
|
||||
comment="Data source",
|
||||
)
|
||||
created_at = Column(TIMESTAMP, server_default=func.now())
|
||||
|
||||
# Relationship
|
||||
body = relationship("CelestialBody", back_populates="positions")
|
||||
|
||||
# Constraints and indexes
|
||||
__table_args__ = (
|
||||
CheckConstraint(
|
||||
"source IN ('nasa_horizons', 'calculated', 'user_defined', 'imported')",
|
||||
name="chk_source",
|
||||
),
|
||||
Index("idx_positions_body_time", "body_id", "time", postgresql_using="btree"),
|
||||
Index("idx_positions_time", "time"),
|
||||
Index("idx_positions_body_id", "body_id"),
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return f"<Position(body_id='{self.body_id}', time='{self.time}', x={self.x}, y={self.y}, z={self.z})>"
|
||||
|
|
@ -0,0 +1,52 @@
|
|||
"""
|
||||
Resource ORM model - File management
|
||||
"""
|
||||
from sqlalchemy import Column, String, Integer, TIMESTAMP, ForeignKey, CheckConstraint, Index
|
||||
from sqlalchemy.dialects.postgresql import JSONB
|
||||
from sqlalchemy.sql import func
|
||||
from sqlalchemy.orm import relationship
|
||||
from app.database import Base
|
||||
|
||||
|
||||
class Resource(Base):
|
||||
"""Resource files (textures, models, icons, etc.)"""
|
||||
|
||||
__tablename__ = "resources"
|
||||
|
||||
id = Column(Integer, primary_key=True, autoincrement=True)
|
||||
body_id = Column(
|
||||
String(50),
|
||||
ForeignKey("celestial_bodies.id", ondelete="CASCADE"),
|
||||
nullable=True,
|
||||
comment="Reference to celestial_bodies.id (optional)",
|
||||
)
|
||||
resource_type = Column(
|
||||
String(50), nullable=False, comment="Resource type"
|
||||
)
|
||||
file_path = Column(
|
||||
String(500),
|
||||
nullable=False,
|
||||
comment="Relative path from upload directory",
|
||||
)
|
||||
file_size = Column(Integer, nullable=True, comment="File size in bytes")
|
||||
mime_type = Column(String(100), nullable=True, comment="MIME type")
|
||||
extra_data = Column(JSONB, nullable=True, comment="Extended metadata (JSON)")
|
||||
created_at = Column(TIMESTAMP, server_default=func.now())
|
||||
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
|
||||
|
||||
# Relationship
|
||||
body = relationship("CelestialBody", back_populates="resources")
|
||||
|
||||
# Constraints and indexes
|
||||
__table_args__ = (
|
||||
CheckConstraint(
|
||||
"resource_type IN ('texture', 'model', 'icon', 'thumbnail', 'data')",
|
||||
name="chk_resource_type",
|
||||
),
|
||||
Index("idx_resources_body_id", "body_id"),
|
||||
Index("idx_resources_type", "resource_type"),
|
||||
Index("idx_resources_unique", "body_id", "resource_type", "file_path", unique=True),
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return f"<Resource(id={self.id}, body_id='{self.body_id}', type='{self.resource_type}', path='{self.file_path}')>"
|
||||
|
|
@ -0,0 +1,28 @@
|
|||
"""
|
||||
Role ORM model
|
||||
"""
|
||||
from sqlalchemy import Column, String, Integer, Text, TIMESTAMP, Index
|
||||
from sqlalchemy.sql import func
|
||||
from sqlalchemy.orm import relationship
|
||||
from app.database import Base
|
||||
from app.models.db.user import user_roles
|
||||
|
||||
|
||||
class Role(Base):
|
||||
"""User role (admin, user, etc.)"""
|
||||
|
||||
__tablename__ = "roles"
|
||||
|
||||
id = Column(Integer, primary_key=True, autoincrement=True)
|
||||
name = Column(String(50), unique=True, nullable=False, index=True, comment="Role name (e.g., 'admin', 'user')")
|
||||
display_name = Column(String(100), nullable=False, comment="Display name")
|
||||
description = Column(Text, nullable=True, comment="Role description")
|
||||
created_at = Column(TIMESTAMP, server_default=func.now())
|
||||
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
|
||||
|
||||
# Relationships
|
||||
users = relationship("User", secondary=user_roles, back_populates="roles")
|
||||
menus = relationship("RoleMenu", back_populates="role", cascade="all, delete-orphan")
|
||||
|
||||
def __repr__(self):
|
||||
return f"<Role(id={self.id}, name='{self.name}')>"
|
||||
|
|
@ -0,0 +1,38 @@
|
|||
"""
|
||||
StaticData ORM model - Static astronomical data
|
||||
"""
|
||||
from sqlalchemy import Column, String, Integer, TIMESTAMP, CheckConstraint, Index, UniqueConstraint
|
||||
from sqlalchemy.dialects.postgresql import JSONB
|
||||
from sqlalchemy.sql import func
|
||||
from app.database import Base
|
||||
|
||||
|
||||
class StaticData(Base):
|
||||
"""Static astronomical data (constellations, galaxies, stars, etc.)"""
|
||||
|
||||
__tablename__ = "static_data"
|
||||
|
||||
id = Column(Integer, primary_key=True, autoincrement=True)
|
||||
category = Column(
|
||||
String(50), nullable=False, comment="Data category"
|
||||
)
|
||||
name = Column(String(200), nullable=False, comment="Name")
|
||||
name_zh = Column(String(200), nullable=True, comment="Chinese name")
|
||||
data = Column(JSONB, nullable=False, comment="Complete data (JSON)")
|
||||
created_at = Column(TIMESTAMP, server_default=func.now())
|
||||
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
|
||||
|
||||
# Constraints and indexes
|
||||
__table_args__ = (
|
||||
CheckConstraint(
|
||||
"category IN ('constellation', 'galaxy', 'star', 'nebula', 'cluster')",
|
||||
name="chk_category",
|
||||
),
|
||||
UniqueConstraint("category", "name", name="uq_category_name"),
|
||||
Index("idx_static_data_category", "category"),
|
||||
Index("idx_static_data_name", "name"),
|
||||
Index("idx_static_data_data", "data", postgresql_using="gin"), # JSONB GIN index
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return f"<StaticData(id={self.id}, category='{self.category}', name='{self.name}')>"
|
||||
|
|
@ -0,0 +1,39 @@
|
|||
"""
|
||||
User ORM model
|
||||
"""
|
||||
from sqlalchemy import Column, String, Integer, Boolean, TIMESTAMP, ForeignKey, Table
|
||||
from sqlalchemy.sql import func
|
||||
from sqlalchemy.orm import relationship
|
||||
from app.database import Base
|
||||
|
||||
|
||||
# Many-to-many relationship table: users <-> roles
|
||||
user_roles = Table(
|
||||
'user_roles',
|
||||
Base.metadata,
|
||||
Column('user_id', Integer, ForeignKey('users.id', ondelete='CASCADE'), primary_key=True),
|
||||
Column('role_id', Integer, ForeignKey('roles.id', ondelete='CASCADE'), primary_key=True),
|
||||
Column('created_at', TIMESTAMP, server_default=func.now()),
|
||||
)
|
||||
|
||||
|
||||
class User(Base):
|
||||
"""User account"""
|
||||
|
||||
__tablename__ = "users"
|
||||
|
||||
id = Column(Integer, primary_key=True, autoincrement=True)
|
||||
username = Column(String(50), unique=True, nullable=False, index=True, comment="Username (unique)")
|
||||
password_hash = Column(String(255), nullable=False, comment="Password hash (bcrypt)")
|
||||
email = Column(String(255), nullable=True, unique=True, index=True, comment="Email address")
|
||||
full_name = Column(String(100), nullable=True, comment="Full name")
|
||||
is_active = Column(Boolean, default=True, nullable=False, comment="Account active status")
|
||||
created_at = Column(TIMESTAMP, server_default=func.now())
|
||||
updated_at = Column(TIMESTAMP, server_default=func.now(), onupdate=func.now())
|
||||
last_login_at = Column(TIMESTAMP, nullable=True, comment="Last login time")
|
||||
|
||||
# Relationships
|
||||
roles = relationship("Role", secondary=user_roles, back_populates="users")
|
||||
|
||||
def __repr__(self):
|
||||
return f"<User(id={self.id}, username='{self.username}')>"
|
||||
|
|
@ -0,0 +1,42 @@
"""
JWT authentication service
"""
from datetime import datetime, timedelta
from typing import Optional
from jose import JWTError, jwt
import bcrypt
from app.config import settings


def verify_password(plain_password: str, hashed_password: str) -> bool:
    """Verify a password against a hash"""
    return bcrypt.checkpw(plain_password.encode('utf-8'), hashed_password.encode('utf-8'))


def hash_password(password: str) -> str:
    """Hash a password"""
    salt = bcrypt.gensalt()
    hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
    return hashed.decode('utf-8')


def create_access_token(data: dict, expires_delta: Optional[timedelta] = None) -> str:
    """Create a JWT access token"""
    to_encode = data.copy()
    if expires_delta:
        expire = datetime.utcnow() + expires_delta
    else:
        expire = datetime.utcnow() + timedelta(minutes=settings.jwt_access_token_expire_minutes)

    to_encode.update({"exp": expire})
    encoded_jwt = jwt.encode(to_encode, settings.jwt_secret_key, algorithm=settings.jwt_algorithm)
    return encoded_jwt


def decode_access_token(token: str) -> Optional[dict]:
    """Decode and verify a JWT access token"""
    try:
        payload = jwt.decode(token, settings.jwt_secret_key, algorithms=[settings.jwt_algorithm])
        return payload
    except JWTError:
        return None
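
A round trip through these helpers, as a sanity-check sketch (the payload fields are illustrative):

```python
from app.services.auth import (
    hash_password,
    verify_password,
    create_access_token,
    decode_access_token,
)

hashed = hash_password("cosmo")
assert verify_password("cosmo", hashed)
assert not verify_password("wrong", hashed)

# "sub" is kept as a string, matching the int(user_id) conversion in the deps below.
token = create_access_token({"sub": "1", "username": "cosmo"})
payload = decode_access_token(token)
assert payload is not None and payload["sub"] == "1"
```
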
|
||||
|
|
@ -0,0 +1,99 @@
|
|||
"""
|
||||
Authentication dependencies for FastAPI
|
||||
"""
|
||||
from typing import Optional
|
||||
from fastapi import Depends, HTTPException, status
|
||||
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from sqlalchemy import select
|
||||
from sqlalchemy.orm import selectinload
|
||||
|
||||
from app.database import get_db
|
||||
from app.models.db import User, Role
|
||||
from app.services.auth import decode_access_token
|
||||
|
||||
|
||||
# HTTP Bearer token scheme
|
||||
security = HTTPBearer()
|
||||
|
||||
|
||||
async def get_current_user(
|
||||
credentials: HTTPAuthorizationCredentials = Depends(security),
|
||||
db: AsyncSession = Depends(get_db)
|
||||
) -> User:
|
||||
"""
|
||||
Get current authenticated user from JWT token
|
||||
|
||||
Raises:
|
||||
HTTPException: If token is invalid or user not found
|
||||
"""
|
||||
token = credentials.credentials
|
||||
|
||||
# Decode token
|
||||
payload = decode_access_token(token)
|
||||
if payload is None:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||
detail="Invalid authentication credentials",
|
||||
headers={"WWW-Authenticate": "Bearer"},
|
||||
)
|
||||
|
||||
# Get user ID from token
|
||||
user_id: Optional[int] = payload.get("sub")
|
||||
if user_id is None:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||
detail="Invalid authentication credentials",
|
||||
headers={"WWW-Authenticate": "Bearer"},
|
||||
)
|
||||
|
||||
# Query user from database with roles
|
||||
result = await db.execute(
|
||||
select(User)
|
||||
.options(selectinload(User.roles))
|
||||
.where(User.id == int(user_id))
|
||||
)
|
||||
user = result.scalar_one_or_none()
|
||||
|
||||
if user is None:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||
detail="User not found",
|
||||
headers={"WWW-Authenticate": "Bearer"},
|
||||
)
|
||||
|
||||
if not user.is_active:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
detail="Inactive user"
|
||||
)
|
||||
|
||||
return user
|
||||
|
||||
|
||||
async def get_current_active_user(
|
||||
current_user: User = Depends(get_current_user)
|
||||
) -> User:
|
||||
"""Get current active user"""
|
||||
return current_user
|
||||
|
||||
|
||||
async def require_admin(
|
||||
current_user: User = Depends(get_current_user)
|
||||
) -> User:
|
||||
"""
|
||||
Require user to have admin role
|
||||
|
||||
Raises:
|
||||
HTTPException: If user is not admin
|
||||
"""
|
||||
# Check if user has admin role
|
||||
is_admin = any(role.name == "admin" for role in current_user.roles)
|
||||
|
||||
if not is_admin:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
detail="Admin privileges required"
|
||||
)
|
||||
|
||||
return current_user
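
Admin-only endpoints can then declare the dependency directly. A sketch — the route path and the module path of the dependency are illustrative assumptions, not taken from this diff:

```python
from fastapi import APIRouter, Depends

from app.api.deps import require_admin   # assumed location of the dependency above
from app.models.db import User

router = APIRouter()


@router.get("/admin/ping")
async def admin_ping(current_user: User = Depends(require_admin)):
    # Reaching this point means the JWT was valid and the user has the admin role.
    return {"ok": True, "user": current_user.username}
```
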
|
||||
|
|
@ -0,0 +1,240 @@
|
|||
"""
|
||||
Cache preheating service
|
||||
Loads data from database to Redis on startup
|
||||
"""
|
||||
import logging
|
||||
from datetime import datetime, timedelta
|
||||
from typing import List, Dict, Any
|
||||
|
||||
from app.database import get_db
|
||||
from app.services.redis_cache import redis_cache, make_cache_key, get_ttl_seconds
|
||||
from app.services.db_service import celestial_body_service, position_service
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
async def preheat_current_positions():
|
||||
"""
|
||||
Preheat current positions from database to Redis
|
||||
Loads the most recent single-point position for all bodies
|
||||
Strategy: Get the latest position for each body (should be current hour or most recent)
|
||||
"""
|
||||
logger.info("=" * 60)
|
||||
logger.info("Starting cache preheat: Current positions")
|
||||
logger.info("=" * 60)
|
||||
|
||||
try:
|
||||
async for db in get_db():
|
||||
# Get all celestial bodies
|
||||
all_bodies = await celestial_body_service.get_all_bodies(db)
|
||||
logger.info(f"Found {len(all_bodies)} celestial bodies")
|
||||
|
||||
# Get current time rounded to the hour
|
||||
now = datetime.utcnow()
|
||||
current_hour = now.replace(minute=0, second=0, microsecond=0)
|
||||
|
||||
# Define time window: current hour ± 1 hour
|
||||
start_window = current_hour - timedelta(hours=1)
|
||||
end_window = current_hour + timedelta(hours=1)
|
||||
|
||||
# Collect positions for all bodies
|
||||
bodies_data = []
|
||||
successful_bodies = 0
|
||||
|
||||
for body in all_bodies:
|
||||
try:
|
||||
# Get position closest to current hour
|
||||
recent_positions = await position_service.get_positions(
|
||||
body_id=body.id,
|
||||
start_time=start_window,
|
||||
end_time=end_window,
|
||||
session=db
|
||||
)
|
||||
|
||||
if recent_positions and len(recent_positions) > 0:
|
||||
# Use the position closest to current hour
|
||||
# Find the one with time closest to current_hour
|
||||
closest_pos = min(
|
||||
recent_positions,
|
||||
key=lambda p: abs((p.time - current_hour).total_seconds())
|
||||
)
|
||||
|
||||
body_dict = {
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"name_zh": body.name_zh,
|
||||
"type": body.type,
|
||||
"description": body.description,
|
||||
"positions": [{
|
||||
"time": closest_pos.time.isoformat(),
|
||||
"x": closest_pos.x,
|
||||
"y": closest_pos.y,
|
||||
"z": closest_pos.z,
|
||||
}]
|
||||
}
|
||||
bodies_data.append(body_dict)
|
||||
successful_bodies += 1
|
||||
logger.debug(f" ✓ Loaded position for {body.name} at {closest_pos.time}")
|
||||
else:
|
||||
logger.warning(f" ⚠ No position found for {body.name} near {current_hour}")
|
||||
|
||||
except Exception as e:
|
||||
logger.warning(f" ✗ Failed to load position for {body.name}: {e}")
|
||||
continue
|
||||
|
||||
# Write to Redis if we have data
|
||||
if bodies_data:
|
||||
# Cache key for current hour
|
||||
time_str = current_hour.isoformat()
|
||||
redis_key = make_cache_key("positions", time_str, time_str, "1h")
|
||||
ttl = get_ttl_seconds("current_positions")
|
||||
|
||||
success = await redis_cache.set(redis_key, bodies_data, ttl)
|
||||
|
||||
if success:
|
||||
logger.info(f"✅ Preheated current positions: {successful_bodies}/{len(all_bodies)} bodies")
|
||||
logger.info(f" Time: {current_hour}")
|
||||
logger.info(f" Redis key: {redis_key}")
|
||||
logger.info(f" TTL: {ttl}s ({ttl // 3600}h)")
|
||||
else:
|
||||
logger.error("❌ Failed to write to Redis")
|
||||
else:
|
||||
logger.warning("⚠ No position data available to preheat")
|
||||
|
||||
break # Only process first database session
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Cache preheat failed: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
|
||||
logger.info("=" * 60)
|
||||
|
||||
|
||||
async def preheat_historical_positions(days: int = 3):
|
||||
"""
|
||||
Preheat historical positions for timeline mode
|
||||
Strategy: For each day, cache the position at 00:00:00 UTC (single point per day)
|
||||
|
||||
Args:
|
||||
days: Number of days to preheat (default: 3)
|
||||
"""
|
||||
logger.info("=" * 60)
|
||||
logger.info(f"Starting cache preheat: Historical positions ({days} days)")
|
||||
logger.info("=" * 60)
|
||||
|
||||
try:
|
||||
async for db in get_db():
|
||||
# Get all celestial bodies
|
||||
all_bodies = await celestial_body_service.get_all_bodies(db)
|
||||
logger.info(f"Found {len(all_bodies)} celestial bodies")
|
||||
|
||||
# Define time window
|
||||
end_date = datetime.utcnow()
|
||||
start_date = end_date - timedelta(days=days)
|
||||
|
||||
logger.info(f"Time range: {start_date.date()} to {end_date.date()}")
|
||||
|
||||
# Preheat each day separately (single point at 00:00:00 per day)
|
||||
cached_days = 0
|
||||
for day_offset in range(days):
|
||||
# Target time: midnight (00:00:00) of this day
|
||||
target_day = start_date + timedelta(days=day_offset)
|
||||
target_midnight = target_day.replace(hour=0, minute=0, second=0, microsecond=0)
|
||||
|
||||
# Search window: ±30 minutes around midnight
|
||||
search_start = target_midnight - timedelta(minutes=30)
|
||||
search_end = target_midnight + timedelta(minutes=30)
|
||||
|
||||
# Collect positions for all bodies for this specific time
|
||||
bodies_data = []
|
||||
successful_bodies = 0
|
||||
|
||||
for body in all_bodies:
|
||||
try:
|
||||
# Query positions near midnight of this day
|
||||
positions = await position_service.get_positions(
|
||||
body_id=body.id,
|
||||
start_time=search_start,
|
||||
end_time=search_end,
|
||||
session=db
|
||||
)
|
||||
|
||||
if positions and len(positions) > 0:
|
||||
# Find the position closest to midnight
|
||||
closest_pos = min(
|
||||
positions,
|
||||
key=lambda p: abs((p.time - target_midnight).total_seconds())
|
||||
)
|
||||
|
||||
body_dict = {
|
||||
"id": body.id,
|
||||
"name": body.name,
|
||||
"name_zh": body.name_zh,
|
||||
"type": body.type,
|
||||
"description": body.description,
|
||||
"positions": [
|
||||
{
|
||||
"time": closest_pos.time.isoformat(),
|
||||
"x": closest_pos.x,
|
||||
"y": closest_pos.y,
|
||||
"z": closest_pos.z,
|
||||
}
|
||||
]
|
||||
}
|
||||
bodies_data.append(body_dict)
|
||||
successful_bodies += 1
|
||||
|
||||
except Exception as e:
|
||||
logger.warning(f" ✗ Failed to load {body.name} for {target_midnight.date()}: {e}")
|
||||
continue
|
||||
|
||||
# Write to Redis if we have complete data
|
||||
if bodies_data and successful_bodies == len(all_bodies):
|
||||
# Cache key for this specific midnight timestamp
|
||||
time_str = target_midnight.isoformat()
|
||||
redis_key = make_cache_key("positions", time_str, time_str, "1d")
|
||||
ttl = get_ttl_seconds("historical_positions")
|
||||
|
||||
success = await redis_cache.set(redis_key, bodies_data, ttl)
|
||||
|
||||
if success:
|
||||
cached_days += 1
|
||||
logger.info(f" ✓ Cached {target_midnight.date()} 00:00 UTC: {successful_bodies} bodies")
|
||||
else:
|
||||
logger.warning(f" ✗ Failed to cache {target_midnight.date()}")
|
||||
else:
|
||||
logger.warning(f" ⚠ Incomplete data for {target_midnight.date()}: {successful_bodies}/{len(all_bodies)} bodies")
|
||||
|
||||
logger.info(f"✅ Preheated {cached_days}/{days} days of historical data")
|
||||
|
||||
break # Only process first database session
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Historical cache preheat failed: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
|
||||
logger.info("=" * 60)
|
||||
|
||||
|
||||
async def preheat_all_caches():
|
||||
"""
|
||||
Preheat all caches on startup
|
||||
Priority:
|
||||
1. Current positions (most important)
|
||||
2. Historical positions for timeline (3 days)
|
||||
"""
|
||||
logger.info("")
|
||||
logger.info("🔥 Starting full cache preheat...")
|
||||
logger.info("")
|
||||
|
||||
# 1. Preheat current positions
|
||||
await preheat_current_positions()
|
||||
|
||||
# 2. Preheat historical positions (3 days)
|
||||
await preheat_historical_positions(days=3)
|
||||
|
||||
logger.info("")
|
||||
logger.info("🔥 Cache preheat completed!")
|
||||
logger.info("")
|
||||
|
|
@ -0,0 +1,466 @@
|
|||
"""
|
||||
Database service layer for celestial data operations
|
||||
"""
|
||||
from typing import List, Optional, Dict, Any
|
||||
from datetime import datetime
|
||||
from sqlalchemy import select, and_, delete
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
import logging
|
||||
|
||||
from app.models.db import CelestialBody, Position, StaticData, NasaCache, Resource
|
||||
from app.database import AsyncSessionLocal
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class CelestialBodyService:
|
||||
"""Service for celestial body operations"""
|
||||
|
||||
@staticmethod
|
||||
async def get_all_bodies(
|
||||
session: Optional[AsyncSession] = None,
|
||||
body_type: Optional[str] = None
|
||||
) -> List[CelestialBody]:
|
||||
"""Get all celestial bodies, optionally filtered by type"""
|
||||
async def _query(s: AsyncSession):
|
||||
query = select(CelestialBody)
|
||||
if body_type:
|
||||
query = query.where(CelestialBody.type == body_type)
|
||||
result = await s.execute(query.order_by(CelestialBody.name))
|
||||
return result.scalars().all()
|
||||
|
||||
if session:
|
||||
return await _query(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _query(s)
|
||||
|
||||
@staticmethod
|
||||
async def get_body_by_id(
|
||||
body_id: str,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> Optional[CelestialBody]:
|
||||
"""Get a celestial body by ID"""
|
||||
async def _query(s: AsyncSession):
|
||||
result = await s.execute(
|
||||
select(CelestialBody).where(CelestialBody.id == body_id)
|
||||
)
|
||||
return result.scalar_one_or_none()
|
||||
|
||||
if session:
|
||||
return await _query(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _query(s)
|
||||
|
||||
@staticmethod
|
||||
async def create_body(
|
||||
body_data: Dict[str, Any],
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> CelestialBody:
|
||||
"""Create a new celestial body"""
|
||||
async def _create(s: AsyncSession):
|
||||
body = CelestialBody(**body_data)
|
||||
s.add(body)
|
||||
await s.commit()
|
||||
await s.refresh(body)
|
||||
return body
|
||||
|
||||
if session:
|
||||
return await _create(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _create(s)
|
||||
|
||||
|
||||
class PositionService:
|
||||
"""Service for position data operations"""
|
||||
|
||||
@staticmethod
|
||||
async def save_positions(
|
||||
body_id: str,
|
||||
positions: List[Dict[str, Any]],
|
||||
source: str = "nasa_horizons",
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> int:
|
||||
"""Save multiple position records for a celestial body (upsert: insert or update if exists)"""
|
||||
async def _save(s: AsyncSession):
|
||||
from sqlalchemy.dialects.postgresql import insert
|
||||
|
||||
count = 0
|
||||
for pos_data in positions:
|
||||
# Use PostgreSQL's INSERT ... ON CONFLICT to handle duplicates
|
||||
stmt = insert(Position).values(
|
||||
body_id=body_id,
|
||||
time=pos_data["time"],
|
||||
x=pos_data["x"],
|
||||
y=pos_data["y"],
|
||||
z=pos_data["z"],
|
||||
vx=pos_data.get("vx"),
|
||||
vy=pos_data.get("vy"),
|
||||
vz=pos_data.get("vz"),
|
||||
source=source
|
||||
)
|
||||
|
||||
# On conflict (body_id, time), update the existing record
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['body_id', 'time'],
|
||||
set_={
|
||||
'x': pos_data["x"],
|
||||
'y': pos_data["y"],
|
||||
'z': pos_data["z"],
|
||||
'vx': pos_data.get("vx"),
|
||||
'vy': pos_data.get("vy"),
|
||||
'vz': pos_data.get("vz"),
|
||||
'source': source
|
||||
}
|
||||
)
|
||||
|
||||
await s.execute(stmt)
|
||||
count += 1
|
||||
|
||||
await s.commit()
|
||||
return count
|
||||
|
||||
if session:
|
||||
return await _save(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _save(s)
|
||||
|
||||
@staticmethod
|
||||
async def get_positions(
|
||||
body_id: str,
|
||||
start_time: Optional[datetime] = None,
|
||||
end_time: Optional[datetime] = None,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> List[Position]:
|
||||
"""Get positions for a celestial body within a time range"""
|
||||
async def _query(s: AsyncSession):
|
||||
query = select(Position).where(Position.body_id == body_id)
|
||||
|
||||
if start_time and end_time:
|
||||
query = query.where(
|
||||
and_(
|
||||
Position.time >= start_time,
|
||||
Position.time <= end_time
|
||||
)
|
||||
)
|
||||
elif start_time:
|
||||
query = query.where(Position.time >= start_time)
|
||||
elif end_time:
|
||||
query = query.where(Position.time <= end_time)
|
||||
|
||||
query = query.order_by(Position.time)
|
||||
result = await s.execute(query)
|
||||
return result.scalars().all()
|
||||
|
||||
if session:
|
||||
return await _query(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _query(s)
|
||||
|
||||
@staticmethod
|
||||
async def get_positions_in_range(
|
||||
body_id: str,
|
||||
start_time: datetime,
|
||||
end_time: datetime,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> List[Position]:
|
||||
"""Alias for get_positions with required time range"""
|
||||
return await PositionService.get_positions(body_id, start_time, end_time, session)
|
||||
|
||||
@staticmethod
|
||||
async def save_position(
|
||||
body_id: str,
|
||||
time: datetime,
|
||||
x: float,
|
||||
y: float,
|
||||
z: float,
|
||||
source: str = "nasa_horizons",
|
||||
vx: Optional[float] = None,
|
||||
vy: Optional[float] = None,
|
||||
vz: Optional[float] = None,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> Position:
|
||||
"""Save a single position record"""
|
||||
async def _save(s: AsyncSession):
|
||||
# Check if position already exists
|
||||
existing = await s.execute(
|
||||
select(Position).where(
|
||||
and_(
|
||||
Position.body_id == body_id,
|
||||
Position.time == time
|
||||
)
|
||||
)
|
||||
)
|
||||
existing_pos = existing.scalar_one_or_none()
|
||||
|
||||
if existing_pos:
|
||||
# Update existing position
|
||||
existing_pos.x = x
|
||||
existing_pos.y = y
|
||||
existing_pos.z = z
|
||||
existing_pos.vx = vx
|
||||
existing_pos.vy = vy
|
||||
existing_pos.vz = vz
|
||||
existing_pos.source = source
|
||||
await s.commit()
|
||||
await s.refresh(existing_pos)
|
||||
return existing_pos
|
||||
else:
|
||||
# Create new position
|
||||
position = Position(
|
||||
body_id=body_id,
|
||||
time=time,
|
||||
x=x,
|
||||
y=y,
|
||||
z=z,
|
||||
vx=vx,
|
||||
vy=vy,
|
||||
vz=vz,
|
||||
source=source
|
||||
)
|
||||
s.add(position)
|
||||
await s.commit()
|
||||
await s.refresh(position)
|
||||
return position
|
||||
|
||||
if session:
|
||||
return await _save(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _save(s)
|
||||
|
||||
@staticmethod
|
||||
async def delete_old_positions(
|
||||
before_time: datetime,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> int:
|
||||
"""Delete position records older than specified time"""
|
||||
async def _delete(s: AsyncSession):
|
||||
result = await s.execute(
|
||||
delete(Position).where(Position.time < before_time)
|
||||
)
|
||||
await s.commit()
|
||||
return result.rowcount
|
||||
|
||||
if session:
|
||||
return await _delete(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _delete(s)
|
||||
|
||||
|
||||
class NasaCacheService:
|
||||
"""Service for NASA API response caching"""
|
||||
|
||||
@staticmethod
|
||||
async def get_cached_response(
|
||||
body_id: str,
|
||||
start_time: Optional[datetime],
|
||||
end_time: Optional[datetime],
|
||||
step: str,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> Optional[Dict[str, Any]]:
|
||||
"""Get cached NASA API response"""
|
||||
async def _query(s: AsyncSession):
|
||||
# Remove timezone info for comparison with database TIMESTAMP WITHOUT TIME ZONE
|
||||
start_naive = start_time.replace(tzinfo=None) if start_time else None
|
||||
end_naive = end_time.replace(tzinfo=None) if end_time else None
|
||||
now_naive = datetime.utcnow()
|
||||
|
||||
result = await s.execute(
|
||||
select(NasaCache).where(
|
||||
and_(
|
||||
NasaCache.body_id == body_id,
|
||||
NasaCache.start_time == start_naive,
|
||||
NasaCache.end_time == end_naive,
|
||||
NasaCache.step == step,
|
||||
NasaCache.expires_at > now_naive
|
||||
)
|
||||
)
|
||||
)
|
||||
cache = result.scalar_one_or_none()
|
||||
return cache.data if cache else None
|
||||
|
||||
if session:
|
||||
return await _query(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _query(s)
|
||||
|
||||
@staticmethod
|
||||
async def save_response(
|
||||
body_id: str,
|
||||
start_time: Optional[datetime],
|
||||
end_time: Optional[datetime],
|
||||
step: str,
|
||||
response_data: Dict[str, Any],
|
||||
ttl_days: int = 7,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> NasaCache:
|
||||
"""Save NASA API response to cache (upsert: insert or update if exists)"""
|
||||
async def _save(s: AsyncSession):
|
||||
from datetime import timedelta
|
||||
from sqlalchemy.dialects.postgresql import insert
|
||||
|
||||
# Remove timezone info for database storage (TIMESTAMP WITHOUT TIME ZONE)
|
||||
start_naive = start_time.replace(tzinfo=None) if start_time else None
|
||||
end_naive = end_time.replace(tzinfo=None) if end_time else None
|
||||
now_naive = datetime.utcnow()
|
||||
|
||||
# Generate cache key
|
||||
start_str = start_time.isoformat() if start_time else "null"
|
||||
end_str = end_time.isoformat() if end_time else "null"
|
||||
cache_key = f"{body_id}:{start_str}:{end_str}:{step}"
|
||||
|
||||
# Use PostgreSQL's INSERT ... ON CONFLICT to handle duplicates atomically
|
||||
stmt = insert(NasaCache).values(
|
||||
cache_key=cache_key,
|
||||
body_id=body_id,
|
||||
start_time=start_naive,
|
||||
end_time=end_naive,
|
||||
step=step,
|
||||
data=response_data,
|
||||
expires_at=now_naive + timedelta(days=ttl_days)
|
||||
)
|
||||
|
||||
# On conflict, update the existing record
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['cache_key'],
|
||||
set_={
|
||||
'data': response_data,
|
||||
'created_at': now_naive,
|
||||
'expires_at': now_naive + timedelta(days=ttl_days)
|
||||
}
|
||||
).returning(NasaCache)
|
||||
|
||||
result = await s.execute(stmt)
|
||||
cache = result.scalar_one()
|
||||
|
||||
await s.commit()
|
||||
await s.refresh(cache)
|
||||
return cache
|
||||
|
||||
if session:
|
||||
return await _save(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _save(s)
|
||||
|
||||
|
||||
class StaticDataService:
|
||||
"""Service for static data operations"""
|
||||
|
||||
@staticmethod
|
||||
async def get_by_category(
|
||||
category: str,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> List[StaticData]:
|
||||
"""Get all static data items for a category"""
|
||||
async def _query(s: AsyncSession):
|
||||
result = await s.execute(
|
||||
select(StaticData)
|
||||
.where(StaticData.category == category)
|
||||
.order_by(StaticData.name)
|
||||
)
|
||||
return result.scalars().all()
|
||||
|
||||
if session:
|
||||
return await _query(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _query(s)
|
||||
|
||||
@staticmethod
|
||||
async def get_all_categories(
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> List[str]:
|
||||
"""Get all available categories"""
|
||||
async def _query(s: AsyncSession):
|
||||
result = await s.execute(
|
||||
select(StaticData.category).distinct()
|
||||
)
|
||||
return [row[0] for row in result]
|
||||
|
||||
if session:
|
||||
return await _query(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _query(s)
|
||||
|
||||
|
||||
class ResourceService:
|
||||
"""Service for resource file management"""
|
||||
|
||||
@staticmethod
|
||||
async def create_resource(
|
||||
resource_data: Dict[str, Any],
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> Resource:
|
||||
"""Create a new resource record"""
|
||||
async def _create(s: AsyncSession):
|
||||
resource = Resource(**resource_data)
|
||||
s.add(resource)
|
||||
await s.commit()
|
||||
await s.refresh(resource)
|
||||
return resource
|
||||
|
||||
if session:
|
||||
return await _create(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _create(s)
|
||||
|
||||
@staticmethod
|
||||
async def get_resources_by_body(
|
||||
body_id: str,
|
||||
resource_type: Optional[str] = None,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> List[Resource]:
|
||||
"""Get all resources for a celestial body"""
|
||||
async def _query(s: AsyncSession):
|
||||
query = select(Resource).where(Resource.body_id == body_id)
|
||||
if resource_type:
|
||||
query = query.where(Resource.resource_type == resource_type)
|
||||
result = await s.execute(query.order_by(Resource.created_at))
|
||||
return result.scalars().all()
|
||||
|
||||
if session:
|
||||
return await _query(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _query(s)
|
||||
|
||||
@staticmethod
|
||||
async def delete_resource(
|
||||
resource_id: int,
|
||||
session: Optional[AsyncSession] = None
|
||||
) -> bool:
|
||||
"""Delete a resource record"""
|
||||
async def _delete(s: AsyncSession):
|
||||
result = await s.execute(
|
||||
select(Resource).where(Resource.id == resource_id)
|
||||
)
|
||||
resource = result.scalar_one_or_none()
|
||||
if resource:
|
||||
await s.delete(resource)
|
||||
await s.commit()
|
||||
return True
|
||||
return False
|
||||
|
||||
if session:
|
||||
return await _delete(session)
|
||||
else:
|
||||
async with AsyncSessionLocal() as s:
|
||||
return await _delete(s)
|
||||
|
||||
|
||||
# Export service instances
|
||||
celestial_body_service = CelestialBodyService()
|
||||
position_service = PositionService()
|
||||
nasa_cache_service = NasaCacheService()
|
||||
static_data_service = StaticDataService()
|
||||
resource_service = ResourceService()
|
||||
|
|
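All of the services above follow the same session pattern: pass an existing `AsyncSession` to run inside the caller's transaction, or omit it and the service opens its own `AsyncSessionLocal()`. `NasaCacheService.save_response` additionally relies on PostgreSQL's `INSERT ... ON CONFLICT` on `cache_key`, so concurrent writers upsert instead of creating duplicate rows. A minimal usage sketch (the caller function is illustrative, not part of this change; parameter names follow the code above):

```python
# Illustrative caller: reuse one session across several service calls.
from datetime import datetime, timedelta

from app.database import AsyncSessionLocal
from app.services.db_service import nasa_cache_service, static_data_service


async def warm_cache_example(body_id: str, payload: dict) -> None:
    """Sketch only: upsert a Horizons response and read static data
    within a single session."""
    async with AsyncSessionLocal() as session:
        # Upserts on cache_key via INSERT ... ON CONFLICT (see save_response above).
        await nasa_cache_service.save_response(
            body_id=body_id,
            start_time=datetime.utcnow(),
            end_time=datetime.utcnow() + timedelta(days=7),
            step="1d",
            response_data=payload,
            ttl_days=3,
            session=session,
        )
        # Same session reused instead of opening a new one per call.
        stars = await static_data_service.get_by_category("star", session=session)
        print(f"{len(stars)} static star records available")
```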
@ -6,7 +6,7 @@ from astroquery.jplhorizons import Horizons
|
|||
from astropy.time import Time
|
||||
import logging
|
||||
|
||||
from app.models.celestial import Position, CelestialBody, CELESTIAL_BODIES
|
||||
from app.models.celestial import Position, CelestialBody
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
|
@ -44,17 +44,19 @@ class HorizonsService:
|
|||
if end_time is None:
|
||||
end_time = start_time
|
||||
|
||||
# Convert to astropy Time objects
|
||||
start_jd = Time(start_time).jd
|
||||
end_jd = Time(end_time).jd
|
||||
# Convert to astropy Time objects for single point queries
|
||||
# For ranges, use ISO format strings which Horizons prefers
|
||||
|
||||
# Create time range
|
||||
if start_jd == end_jd:
|
||||
epochs = start_jd
|
||||
if start_time == end_time:
|
||||
# Single time point - use JD format
|
||||
epochs = Time(start_time).jd
|
||||
else:
|
||||
# Create range with step - use JD (Julian Date) format for Horizons
|
||||
# JD format is more reliable than ISO strings
|
||||
epochs = {"start": str(start_jd), "stop": str(end_jd), "step": step}
|
||||
# Time range - use ISO format (YYYY-MM-DD HH:MM)
|
||||
# Horizons expects this format for ranges
|
||||
start_str = start_time.strftime('%Y-%m-%d %H:%M')
|
||||
end_str = end_time.strftime('%Y-%m-%d %H:%M')
|
||||
epochs = {"start": start_str, "stop": end_str, "step": step}
|
||||
|
||||
logger.info(f"Querying Horizons for body {body_id} from {start_time} to {end_time}")
|
||||
|
||||
|
|
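For reference, the two epoch shapes the code above hands to `astroquery.jplhorizons.Horizons` look like this. This is a standalone sketch, not the service code itself; the heliocentric `location="500@10"` is an illustrative assumption about the query center:

```python
# Standalone sketch of the two epoch formats discussed above.
# Assumes astroquery/astropy are installed; '500@10' (Sun body center)
# is an illustrative query origin, not necessarily the one the service uses.
from astropy.time import Time
from astroquery.jplhorizons import Horizons

# Single time point: one Julian Date value.
single = Horizons(id="499", location="500@10", epochs=Time("2025-01-01").jd)

# Time range: ISO start/stop strings plus a step size.
ranged = Horizons(
    id="499",
    location="500@10",
    epochs={"start": "2025-01-01 00:00", "stop": "2025-02-01 00:00", "step": "1d"},
)

vectors = ranged.vectors()  # astropy Table of state vectors
print(vectors["datetime_jd", "x", "y", "z"][:3])
```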
@ -91,68 +93,6 @@ class HorizonsService:
|
|||
logger.error(f"Error querying Horizons for body {body_id}: {str(e)}")
|
||||
raise
|
||||
|
||||
def get_all_bodies(
|
||||
self,
|
||||
start_time: datetime | None = None,
|
||||
end_time: datetime | None = None,
|
||||
step: str = "1d",
|
||||
) -> list[CelestialBody]:
|
||||
"""
|
||||
Get positions for all predefined celestial bodies
|
||||
|
||||
Args:
|
||||
start_time: Start datetime
|
||||
end_time: End datetime
|
||||
step: Time step
|
||||
|
||||
Returns:
|
||||
List of CelestialBody objects
|
||||
"""
|
||||
bodies = []
|
||||
|
||||
for body_id, info in CELESTIAL_BODIES.items():
|
||||
try:
|
||||
# Special handling for the Sun (it's at origin)
|
||||
if body_id == "10":
|
||||
# Sun is at (0, 0, 0)
|
||||
if start_time is None:
|
||||
start_time = datetime.utcnow()
|
||||
if end_time is None:
|
||||
end_time = start_time
|
||||
|
||||
positions = [
|
||||
Position(time=start_time, x=0.0, y=0.0, z=0.0)
|
||||
]
|
||||
if start_time != end_time:
|
||||
# Add end position as well
|
||||
positions.append(
|
||||
Position(time=end_time, x=0.0, y=0.0, z=0.0)
|
||||
)
|
||||
# Special handling for Cassini (mission ended 2017-09-15)
|
||||
elif body_id == "-82":
|
||||
# Use Cassini's last known position (2017-09-15)
|
||||
cassini_date = datetime(2017, 9, 15, 11, 58, 0)
|
||||
positions = self.get_body_positions(body_id, cassini_date, cassini_date, step)
|
||||
else:
|
||||
# Query other bodies
|
||||
positions = self.get_body_positions(body_id, start_time, end_time, step)
|
||||
|
||||
body = CelestialBody(
|
||||
id=body_id,
|
||||
name=info["name"],
|
||||
name_zh=info.get("name_zh"),
|
||||
type=info["type"],
|
||||
positions=positions,
|
||||
description=info["description"],
|
||||
)
|
||||
bodies.append(body)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to get data for {info['name']}: {str(e)}")
|
||||
# Continue with other bodies even if one fails
|
||||
|
||||
return bodies
|
||||
|
||||
|
||||
# Singleton instance
|
||||
horizons_service = HorizonsService()
|
||||
|
|
|
|||
|
|
@ -0,0 +1,189 @@
|
|||
"""
|
||||
Service for managing orbital data
|
||||
"""
|
||||
from datetime import datetime, timedelta
|
||||
from typing import List, Dict, Optional
|
||||
from sqlalchemy import select
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from sqlalchemy.dialects.postgresql import insert
|
||||
|
||||
from app.models.db.orbit import Orbit
|
||||
from app.models.db.celestial_body import CelestialBody
|
||||
from app.services.horizons import HorizonsService
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class OrbitService:
|
||||
"""Service for orbit CRUD operations and generation"""
|
||||
|
||||
@staticmethod
|
||||
async def get_orbit(body_id: str, session: AsyncSession) -> Optional[Orbit]:
|
||||
"""Get orbit data for a specific body"""
|
||||
result = await session.execute(
|
||||
select(Orbit).where(Orbit.body_id == body_id)
|
||||
)
|
||||
return result.scalar_one_or_none()
|
||||
|
||||
@staticmethod
|
||||
async def get_all_orbits(
|
||||
session: AsyncSession,
|
||||
body_type: Optional[str] = None
|
||||
) -> List[Orbit]:
|
||||
"""Get all orbits, optionally filtered by body type"""
|
||||
if body_type:
|
||||
# Join with celestial_bodies to filter by type
|
||||
query = (
|
||||
select(Orbit)
|
||||
.join(CelestialBody, Orbit.body_id == CelestialBody.id)
|
||||
.where(CelestialBody.type == body_type)
|
||||
)
|
||||
else:
|
||||
query = select(Orbit)
|
||||
|
||||
result = await session.execute(query)
|
||||
return list(result.scalars().all())
|
||||
|
||||
@staticmethod
|
||||
async def save_orbit(
|
||||
body_id: str,
|
||||
points: List[Dict[str, float]],
|
||||
num_points: int,
|
||||
period_days: Optional[float],
|
||||
color: Optional[str],
|
||||
session: AsyncSession
|
||||
) -> Orbit:
|
||||
"""Save or update orbit data using UPSERT"""
|
||||
stmt = insert(Orbit).values(
|
||||
body_id=body_id,
|
||||
points=points,
|
||||
num_points=num_points,
|
||||
period_days=period_days,
|
||||
color=color,
|
||||
created_at=datetime.utcnow(),
|
||||
updated_at=datetime.utcnow()
|
||||
)
|
||||
|
||||
# On conflict, update all fields
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['body_id'],
|
||||
set_={
|
||||
'points': points,
|
||||
'num_points': num_points,
|
||||
'period_days': period_days,
|
||||
'color': color,
|
||||
'updated_at': datetime.utcnow()
|
||||
}
|
||||
)
|
||||
|
||||
await session.execute(stmt)
|
||||
await session.commit()
|
||||
|
||||
# Fetch and return the saved orbit
|
||||
return await OrbitService.get_orbit(body_id, session)
|
||||
|
||||
@staticmethod
|
||||
async def delete_orbit(body_id: str, session: AsyncSession) -> bool:
|
||||
"""Delete orbit data for a specific body"""
|
||||
orbit = await OrbitService.get_orbit(body_id, session)
|
||||
if orbit:
|
||||
await session.delete(orbit)
|
||||
await session.commit()
|
||||
return True
|
||||
return False
|
||||
|
||||
@staticmethod
|
||||
async def generate_orbit(
|
||||
body_id: str,
|
||||
body_name: str,
|
||||
period_days: float,
|
||||
color: Optional[str],
|
||||
session: AsyncSession,
|
||||
horizons_service: HorizonsService
|
||||
) -> Orbit:
|
||||
"""
|
||||
Generate complete orbital data for a celestial body
|
||||
|
||||
Args:
|
||||
body_id: JPL Horizons ID
|
||||
body_name: Display name (for logging)
|
||||
period_days: Orbital period in days
|
||||
color: Hex color for orbit line
|
||||
session: Database session
|
||||
horizons_service: NASA Horizons API service
|
||||
|
||||
Returns:
|
||||
Generated Orbit object
|
||||
"""
|
||||
logger.info(f"🌌 Generating orbit for {body_name} (period: {period_days:.1f} days)")
|
||||
|
||||
# Calculate number of sample points
|
||||
# Use at least 100 points for smooth ellipse
|
||||
# For very long periods, cap at 1000 to avoid excessive data
|
||||
MIN_POINTS = 100
|
||||
MAX_POINTS = 1000
|
||||
|
||||
if period_days < 3650: # < 10 years
|
||||
# For planets: aim for ~1 point per day, minimum 100
|
||||
num_points = max(MIN_POINTS, min(int(period_days), 365))
|
||||
else: # >= 10 years
|
||||
# For outer planets and dwarf planets: monthly sampling
|
||||
num_points = min(int(period_days / 30), MAX_POINTS)
|
||||
|
||||
# Calculate step size in days
|
||||
step_days = max(1, int(period_days / num_points))
|
||||
|
||||
logger.info(f" 📊 Sampling {num_points} points (every {step_days} days)")
|
||||
|
||||
# Query NASA Horizons for complete orbital period
|
||||
# For very long periods (>150 years), start from a historical date
|
||||
# to ensure we can get complete orbit data within NASA's range
|
||||
if period_days > 150 * 365: # More than 150 years
|
||||
# Start from year 1900 for historical data
|
||||
start_time = datetime(1900, 1, 1)
|
||||
end_time = start_time + timedelta(days=period_days)
|
||||
logger.info(f" 📅 Using historical date range (1900-{end_time.year}) for long-period orbit")
|
||||
else:
|
||||
start_time = datetime.utcnow()
|
||||
end_time = start_time + timedelta(days=period_days)
|
||||
|
||||
try:
|
||||
# Get positions from Horizons (synchronous call)
|
||||
positions = horizons_service.get_body_positions(
|
||||
body_id=body_id,
|
||||
start_time=start_time,
|
||||
end_time=end_time,
|
||||
step=f"{step_days}d"
|
||||
)
|
||||
|
||||
if not positions or len(positions) == 0:
|
||||
raise ValueError(f"No position data returned for {body_name}")
|
||||
|
||||
# Convert Position objects to list of dicts
|
||||
points = [
|
||||
{"x": pos.x, "y": pos.y, "z": pos.z}
|
||||
for pos in positions
|
||||
]
|
||||
|
||||
logger.info(f" ✅ Retrieved {len(points)} orbital points")
|
||||
|
||||
# Save to database
|
||||
orbit = await OrbitService.save_orbit(
|
||||
body_id=body_id,
|
||||
points=points,
|
||||
num_points=len(points),
|
||||
period_days=period_days,
|
||||
color=color,
|
||||
session=session
|
||||
)
|
||||
|
||||
logger.info(f" 💾 Saved orbit for {body_name}")
|
||||
return orbit
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f" ❌ Failed to generate orbit for {body_name}: {e}")
|
||||
raise
|
||||
|
||||
|
||||
orbit_service = OrbitService()
|
||||
|
|
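The sampling rule in `generate_orbit` trades orbit resolution against NASA API load: roughly one point per day for periods under ten years, monthly samples beyond that, capped at 1000 points. A small sketch that reproduces the same arithmetic for two familiar periods:

```python
# Reproduces the point-count arithmetic in OrbitService.generate_orbit.
MIN_POINTS, MAX_POINTS = 100, 1000


def sampling(period_days: float) -> tuple[int, int]:
    """Return (num_points, step_days) for a given orbital period."""
    if period_days < 3650:  # shorter than ~10 years: ~daily sampling
        num_points = max(MIN_POINTS, min(int(period_days), 365))
    else:  # outer planets / dwarf planets: monthly sampling, capped
        num_points = min(int(period_days / 30), MAX_POINTS)
    step_days = max(1, int(period_days / num_points))
    return num_points, step_days


print(sampling(365.25))   # Earth   -> (365, 1): one point per day
print(sampling(60190.0))  # Neptune -> (1000, 60): one point every ~2 months
```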
@ -0,0 +1,204 @@
|
|||
"""
|
||||
Redis cache service
|
||||
|
||||
Provides three-layer caching:
|
||||
L1: In-memory cache (process-level, TTL: 10min)
|
||||
L2: Redis cache (shared, TTL: 1h-7days)
|
||||
L3: Database (persistent)
|
||||
"""
|
||||
import redis.asyncio as redis
|
||||
from typing import Any, Optional
|
||||
import json
|
||||
import logging
|
||||
from datetime import datetime, timedelta
|
||||
from app.config import settings
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class RedisCache:
|
||||
"""Redis cache manager"""
|
||||
|
||||
def __init__(self):
|
||||
self.client: Optional[redis.Redis] = None
|
||||
self._connected = False
|
||||
|
||||
async def connect(self):
|
||||
"""Connect to Redis"""
|
||||
try:
|
||||
self.client = redis.from_url(
|
||||
settings.redis_url,
|
||||
encoding="utf-8",
|
||||
decode_responses=True,
|
||||
max_connections=settings.redis_max_connections,
|
||||
)
|
||||
# Test connection
|
||||
await self.client.ping()
|
||||
self._connected = True
|
||||
logger.info(f"✓ Connected to Redis at {settings.redis_host}:{settings.redis_port}")
|
||||
except Exception as e:
|
||||
logger.warning(f"⚠ Redis connection failed: {e}")
|
||||
logger.warning("Falling back to in-memory cache only")
|
||||
self._connected = False
|
||||
|
||||
async def disconnect(self):
|
||||
"""Disconnect from Redis"""
|
||||
if self.client:
|
||||
await self.client.close()
|
||||
logger.info("Redis connection closed")
|
||||
|
||||
async def get(self, key: str) -> Optional[Any]:
|
||||
"""Get value from Redis cache"""
|
||||
if not self._connected or not self.client:
|
||||
return None
|
||||
|
||||
try:
|
||||
value = await self.client.get(key)
|
||||
if value:
|
||||
logger.debug(f"Redis cache HIT: {key}")
|
||||
return json.loads(value)
|
||||
logger.debug(f"Redis cache MISS: {key}")
|
||||
return None
|
||||
except Exception as e:
|
||||
logger.error(f"Redis get error for key '{key}': {e}")
|
||||
return None
|
||||
|
||||
async def set(
|
||||
self,
|
||||
key: str,
|
||||
value: Any,
|
||||
ttl_seconds: Optional[int] = None,
|
||||
) -> bool:
|
||||
"""Set value in Redis cache with optional TTL"""
|
||||
if not self._connected or not self.client:
|
||||
return False
|
||||
|
||||
try:
|
||||
serialized = json.dumps(value, default=str)
|
||||
if ttl_seconds:
|
||||
await self.client.setex(key, ttl_seconds, serialized)
|
||||
else:
|
||||
await self.client.set(key, serialized)
|
||||
logger.debug(f"Redis cache SET: {key} (TTL: {ttl_seconds}s)")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Redis set error for key '{key}': {e}")
|
||||
return False
|
||||
|
||||
async def delete(self, key: str) -> bool:
|
||||
"""Delete key from Redis cache"""
|
||||
if not self._connected or not self.client:
|
||||
return False
|
||||
|
||||
try:
|
||||
await self.client.delete(key)
|
||||
logger.debug(f"Redis cache DELETE: {key}")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Redis delete error for key '{key}': {e}")
|
||||
return False
|
||||
|
||||
async def exists(self, key: str) -> bool:
|
||||
"""Check if key exists in Redis cache"""
|
||||
if not self._connected or not self.client:
|
||||
return False
|
||||
|
||||
try:
|
||||
result = await self.client.exists(key)
|
||||
return result > 0
|
||||
except Exception as e:
|
||||
logger.error(f"Redis exists error for key '{key}': {e}")
|
||||
return False
|
||||
|
||||
async def clear_pattern(self, pattern: str) -> int:
|
||||
"""Clear all keys matching pattern"""
|
||||
if not self._connected or not self.client:
|
||||
return 0
|
||||
|
||||
try:
|
||||
keys = []
|
||||
async for key in self.client.scan_iter(match=pattern):
|
||||
keys.append(key)
|
||||
|
||||
if keys:
|
||||
deleted = await self.client.delete(*keys)
|
||||
logger.info(f"Cleared {deleted} keys matching pattern '{pattern}'")
|
||||
return deleted
|
||||
return 0
|
||||
except Exception as e:
|
||||
logger.error(f"Redis clear_pattern error for pattern '{pattern}': {e}")
|
||||
return 0
|
||||
|
||||
async def get_stats(self) -> dict:
|
||||
"""Get Redis statistics"""
|
||||
if not self._connected or not self.client:
|
||||
return {"connected": False}
|
||||
|
||||
try:
|
||||
info = await self.client.info()
|
||||
return {
|
||||
"connected": True,
|
||||
"used_memory_human": info.get("used_memory_human"),
|
||||
"connected_clients": info.get("connected_clients"),
|
||||
"total_commands_processed": info.get("total_commands_processed"),
|
||||
"keyspace_hits": info.get("keyspace_hits"),
|
||||
"keyspace_misses": info.get("keyspace_misses"),
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Redis get_stats error: {e}")
|
||||
return {"connected": False, "error": str(e)}
|
||||
|
||||
|
||||
# Singleton instance
|
||||
redis_cache = RedisCache()
|
||||
|
||||
|
||||
# Helper functions for common cache operations
|
||||
|
||||
def make_cache_key(prefix: str, *args) -> str:
|
||||
"""Create standardized cache key"""
|
||||
parts = [str(arg) for arg in args if arg is not None]
|
||||
return f"{prefix}:{':'.join(parts)}"
|
||||
|
||||
|
||||
def get_ttl_seconds(cache_type: str) -> int:
|
||||
"""Get TTL in seconds based on cache type"""
|
||||
ttl_map = {
|
||||
"current_positions": 3600, # 1 hour
|
||||
"historical_positions": 86400 * 7, # 7 days
|
||||
"static_data": 86400 * 30, # 30 days
|
||||
"nasa_api_response": 86400 * 3, # 3 days (from settings)
|
||||
}
|
||||
return ttl_map.get(cache_type, 3600) # Default 1 hour
|
||||
|
||||
|
||||
async def cache_nasa_response(
|
||||
body_id: str,
|
||||
start_time: Optional[datetime],
|
||||
end_time: Optional[datetime],
|
||||
step: str,
|
||||
data: Any,
|
||||
) -> bool:
|
||||
"""Cache NASA Horizons API response"""
|
||||
# Create cache key
|
||||
start_str = start_time.isoformat() if start_time else "now"
|
||||
end_str = end_time.isoformat() if end_time else "now"
|
||||
cache_key = make_cache_key("nasa", body_id, start_str, end_str, step)
|
||||
|
||||
# Cache in Redis
|
||||
ttl = get_ttl_seconds("nasa_api_response")
|
||||
return await redis_cache.set(cache_key, data, ttl)
|
||||
|
||||
|
||||
async def get_cached_nasa_response(
|
||||
body_id: str,
|
||||
start_time: Optional[datetime],
|
||||
end_time: Optional[datetime],
|
||||
step: str,
|
||||
) -> Optional[Any]:
|
||||
"""Get cached NASA Horizons API response"""
|
||||
start_str = start_time.isoformat() if start_time else "now"
|
||||
end_str = end_time.isoformat() if end_time else "now"
|
||||
cache_key = make_cache_key("nasa", body_id, start_str, end_str, step)
|
||||
|
||||
return await redis_cache.get(cache_key)
|
||||
|
|
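A quick illustration of how the helpers above fit together: `make_cache_key` builds the colon-separated key, `get_ttl_seconds` picks the expiry class, and the NASA helpers wrap both for a read-through pattern. The body id and timestamps below are arbitrary example values:

```python
# Arbitrary example values; shows key construction and read-through usage.
import asyncio
from datetime import datetime

from app.services.redis_cache import (
    cache_nasa_response,
    get_cached_nasa_response,
    make_cache_key,
    redis_cache,
)


async def demo() -> None:
    await redis_cache.connect()

    start = datetime(2025, 1, 1)
    print(make_cache_key("nasa", "499", start.isoformat(), "1d"))
    # -> "nasa:499:2025-01-01T00:00:00:1d"

    cached = await get_cached_nasa_response("499", start, None, "1d")
    if cached is None:
        data = [{"time": "2025-01-01T00:00:00", "x": 1.0, "y": 0.0, "z": 0.0}]
        # Stored with the "nasa_api_response" TTL (3 days).
        await cache_nasa_response("499", start, None, "1d", data)

    await redis_cache.disconnect()


asyncio.run(demo())
```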
@ -0,0 +1,136 @@
|
|||
"""
|
||||
Token management service using Redis
|
||||
"""
|
||||
from typing import Optional
|
||||
from datetime import timedelta
|
||||
from app.services.redis_cache import redis_cache
|
||||
from app.config import settings
|
||||
import json
|
||||
|
||||
|
||||
class TokenService:
|
||||
"""Token management with Redis"""
|
||||
|
||||
def __init__(self):
|
||||
self.prefix = "token:"
|
||||
self.blacklist_prefix = "token:blacklist:"
|
||||
self.user_tokens_prefix = "user:tokens:"
|
||||
|
||||
async def save_token(self, token: str, user_id: int, username: str) -> None:
|
||||
"""
|
||||
Save token to Redis with user info
|
||||
|
||||
Args:
|
||||
token: JWT access token
|
||||
user_id: User ID
|
||||
username: Username
|
||||
"""
|
||||
# Save token with user info
|
||||
token_data = {
|
||||
"user_id": user_id,
|
||||
"username": username
|
||||
}
|
||||
|
||||
# Set token in Redis with a TTL matching the configured JWT expiry
|
||||
ttl_seconds = settings.jwt_access_token_expire_minutes * 60
|
||||
await redis_cache.set(
|
||||
f"{self.prefix}{token}",
|
||||
json.dumps(token_data),
|
||||
ttl_seconds=ttl_seconds
|
||||
)
|
||||
|
||||
# Track user's active tokens (for multi-device support)
|
||||
user_tokens_key = f"{self.user_tokens_prefix}{user_id}"
|
||||
# Add token to user's token set
|
||||
if redis_cache.client:
|
||||
await redis_cache.client.sadd(user_tokens_key, token)
|
||||
await redis_cache.client.expire(user_tokens_key, ttl_seconds)
|
||||
|
||||
async def get_token_data(self, token: str) -> Optional[dict]:
|
||||
"""
|
||||
Get token data from Redis
|
||||
|
||||
Args:
|
||||
token: JWT access token
|
||||
|
||||
Returns:
|
||||
Token data dict or None if not found/expired
|
||||
"""
|
||||
# Check if token is blacklisted
|
||||
is_blacklisted = await redis_cache.exists(f"{self.blacklist_prefix}{token}")
|
||||
if is_blacklisted:
|
||||
return None
|
||||
|
||||
# Get token data
|
||||
data = await redis_cache.get(f"{self.prefix}{token}")
|
||||
if data:
|
||||
return json.loads(data)
|
||||
return None
|
||||
|
||||
async def revoke_token(self, token: str) -> None:
|
||||
"""
|
||||
Revoke a token (logout)
|
||||
|
||||
Args:
|
||||
token: JWT access token
|
||||
"""
|
||||
# Get token data first to know user_id
|
||||
token_data = await self.get_token_data(token)
|
||||
|
||||
# Add to blacklist
|
||||
ttl_seconds = settings.jwt_access_token_expire_minutes * 60
|
||||
await redis_cache.set(
|
||||
f"{self.blacklist_prefix}{token}",
|
||||
"1",
|
||||
ttl_seconds=ttl_seconds
|
||||
)
|
||||
|
||||
# Delete from active tokens
|
||||
await redis_cache.delete(f"{self.prefix}{token}")
|
||||
|
||||
# Remove from user's token set
|
||||
if token_data and redis_cache.client:
|
||||
user_id = token_data.get("user_id")
|
||||
if user_id:
|
||||
await redis_cache.client.srem(
|
||||
f"{self.user_tokens_prefix}{user_id}",
|
||||
token
|
||||
)
|
||||
|
||||
async def revoke_all_user_tokens(self, user_id: int) -> None:
|
||||
"""
|
||||
Revoke all tokens for a user (logout from all devices)
|
||||
|
||||
Args:
|
||||
user_id: User ID
|
||||
"""
|
||||
if not redis_cache.client:
|
||||
return
|
||||
|
||||
# Get all user's tokens
|
||||
user_tokens_key = f"{self.user_tokens_prefix}{user_id}"
|
||||
tokens = await redis_cache.client.smembers(user_tokens_key)
|
||||
|
||||
# Revoke each token
|
||||
for token in tokens:
|
||||
await self.revoke_token(token.decode() if isinstance(token, bytes) else token)
|
||||
|
||||
# Clear user's token set
|
||||
await redis_cache.delete(user_tokens_key)
|
||||
|
||||
async def is_token_valid(self, token: str) -> bool:
|
||||
"""
|
||||
Check if token is valid (not blacklisted and exists in Redis)
|
||||
|
||||
Args:
|
||||
token: JWT access token
|
||||
|
||||
Returns:
|
||||
True if valid, False otherwise
|
||||
"""
|
||||
token_data = await self.get_token_data(token)
|
||||
return token_data is not None
|
||||
|
||||
|
||||
# Global token service instance
|
||||
token_service = TokenService()
|
||||
|
|
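The service keeps two structures in Redis: a blacklist entry per revoked token and a `user:tokens:{user_id}` set that enables logout from all devices. A hedged sketch of how it would be wired into login/logout; the JWT claim shape and the `settings.jwt_secret_key` name are assumptions, not code from this change:

```python
# Hedged sketch: wiring TokenService into a login/logout flow.
# The claim shape and settings.jwt_secret_key name are assumptions.
from datetime import datetime, timedelta

from jose import jwt  # python-jose, already listed in requirements

from app.config import settings
from app.services.token_service import token_service


def create_access_token(user_id: int, username: str) -> str:
    claims = {
        "sub": username,
        "user_id": user_id,
        "exp": datetime.utcnow()
        + timedelta(minutes=settings.jwt_access_token_expire_minutes),
    }
    return jwt.encode(claims, settings.jwt_secret_key, algorithm="HS256")


async def login(user_id: int, username: str) -> str:
    token = create_access_token(user_id, username)
    await token_service.save_token(token, user_id, username)  # Redis + user set
    return token


async def logout(token: str) -> None:
    await token_service.revoke_token(token)  # blacklist + remove from user set


async def logout_all_devices(user_id: int) -> None:
    await token_service.revoke_all_user_tokens(user_id)
```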
@ -6,3 +6,25 @@ pydantic==2.5.0
|
|||
pydantic-settings==2.1.0
|
||||
python-dotenv==1.0.0
|
||||
httpx==0.25.2
|
||||
|
||||
# Database
|
||||
sqlalchemy==2.0.23
|
||||
asyncpg==0.29.0
|
||||
alembic==1.13.0
|
||||
greenlet==3.0.1
|
||||
|
||||
# Redis
|
||||
redis==5.0.1
|
||||
|
||||
# Authentication
|
||||
bcrypt==5.0.0
|
||||
python-jose[cryptography]==3.5.0
|
||||
passlib[bcrypt]==1.7.4
|
||||
|
||||
# File handling
|
||||
python-multipart==0.0.6
|
||||
aiofiles==23.2.1
|
||||
Pillow==10.1.0
|
||||
|
||||
# Date handling
|
||||
python-dateutil==2.8.2
|
||||
|
|
|
|||
|
|
@ -0,0 +1,77 @@
|
|||
"""
|
||||
Add Pluto to celestial bodies database
|
||||
"""
|
||||
import asyncio
|
||||
from sqlalchemy.dialects.postgresql import insert as pg_insert
|
||||
from app.database import get_db
|
||||
from app.models.db.celestial_body import CelestialBody
|
||||
from app.models.db.resource import Resource
|
||||
|
||||
|
||||
async def add_pluto():
|
||||
"""Add Pluto to the database"""
|
||||
async for session in get_db():
|
||||
try:
|
||||
# Add Pluto as a celestial body
|
||||
print("📍 Adding Pluto to celestial_bodies table...")
|
||||
stmt = pg_insert(CelestialBody).values(
|
||||
id="999",
|
||||
name="Pluto",
|
||||
name_zh="冥王星",
|
||||
type="planet",
|
||||
description="冥王星,曾经的第九大行星,现为矮行星"
|
||||
)
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['id'],
|
||||
set_={
|
||||
'name': "Pluto",
|
||||
'name_zh': "冥王星",
|
||||
'type': "planet",
|
||||
'description': "冥王星,曾经的第九大行星,现为矮行星"
|
||||
}
|
||||
)
|
||||
await session.execute(stmt)
|
||||
await session.commit()
|
||||
print("✅ Pluto added successfully!")
|
||||
|
||||
# Check if Pluto texture exists
|
||||
import os
|
||||
texture_path = "upload/texture/2k_pluto.jpg"
|
||||
if os.path.exists(texture_path):
|
||||
print(f"\n📸 Found Pluto texture: {texture_path}")
|
||||
file_size = os.path.getsize(texture_path)
|
||||
|
||||
# Add texture resource
|
||||
print("📦 Adding Pluto texture to resources table...")
|
||||
stmt = pg_insert(Resource).values(
|
||||
body_id="999",
|
||||
resource_type="texture",
|
||||
file_path="texture/2k_pluto.jpg",
|
||||
file_size=file_size,
|
||||
mime_type="image/jpeg",
|
||||
extra_data=None
|
||||
)
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['body_id', 'resource_type', 'file_path'],
|
||||
set_={
|
||||
'file_size': file_size,
|
||||
'mime_type': "image/jpeg",
|
||||
}
|
||||
)
|
||||
await session.execute(stmt)
|
||||
await session.commit()
|
||||
print(f"✅ Pluto texture resource added ({file_size} bytes)")
|
||||
else:
|
||||
print(f"\n⚠️ Pluto texture not found at {texture_path}")
|
||||
print(" Please add a 2k_pluto.jpg file to upload/texture/ directory")
|
||||
|
||||
except Exception as e:
|
||||
print(f"❌ Error adding Pluto: {e}")
|
||||
await session.rollback()
|
||||
raise
|
||||
finally:
|
||||
break
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(add_pluto())
|
||||
|
|
@ -0,0 +1,27 @@
|
|||
-- 为 positions 表添加唯一约束
|
||||
-- 这样 ON CONFLICT 才能正常工作
|
||||
|
||||
-- 1. 先删除现有的重复数据(如果有)
|
||||
WITH duplicates AS (
|
||||
SELECT id,
|
||||
ROW_NUMBER() OVER (
|
||||
PARTITION BY body_id, time
|
||||
ORDER BY created_at DESC
|
||||
) as rn
|
||||
FROM positions
|
||||
)
|
||||
DELETE FROM positions
|
||||
WHERE id IN (
|
||||
SELECT id FROM duplicates WHERE rn > 1
|
||||
);
|
||||
|
||||
-- 2. 添加唯一约束
|
||||
ALTER TABLE positions
|
||||
ADD CONSTRAINT positions_body_time_unique
|
||||
UNIQUE (body_id, time);
|
||||
|
||||
-- 3. 验证约束已创建
|
||||
SELECT constraint_name, constraint_type
|
||||
FROM information_schema.table_constraints
|
||||
WHERE table_name = 'positions'
|
||||
AND constraint_type = 'UNIQUE';
|
||||
|
|
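With the `(body_id, time)` unique constraint in place, position writes can go through PostgreSQL's upsert path instead of failing on duplicates. A minimal sketch of the statement this enables (the `Position` model import path is assumed; the conflict target follows the constraint above):

```python
# Sketch of the upsert that the (body_id, time) constraint enables.
# The Position model import path is assumed for illustration.
from datetime import datetime

from sqlalchemy.dialects.postgresql import insert

from app.models.db.position import Position  # assumed module path


def upsert_position_stmt(body_id: str, time: datetime, x: float, y: float, z: float):
    stmt = insert(Position).values(
        body_id=body_id, time=time, x=x, y=y, z=z, created_at=datetime.utcnow()
    )
    # Conflict target matches positions_body_time_unique created above.
    return stmt.on_conflict_do_update(
        index_elements=["body_id", "time"],
        set_={"x": x, "y": y, "z": z, "created_at": datetime.utcnow()},
    )
```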
@ -0,0 +1,214 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
配置验证脚本 - 检查 PostgreSQL 和 Redis 配置是否正确
|
||||
|
||||
Usage:
|
||||
python scripts/check_config.py
|
||||
"""
|
||||
import asyncio
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from app.config import settings
|
||||
import asyncpg
|
||||
import redis.asyncio as redis
|
||||
import logging
|
||||
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
async def check_postgresql():
|
||||
"""检查 PostgreSQL 连接"""
|
||||
print("\n" + "=" * 60)
|
||||
print("检查 PostgreSQL 配置")
|
||||
print("=" * 60)
|
||||
|
||||
try:
|
||||
# 连接参数
|
||||
print(f"主机: {settings.database_host}")
|
||||
print(f"端口: {settings.database_port}")
|
||||
print(f"数据库: {settings.database_name}")
|
||||
print(f"用户: {settings.database_user}")
|
||||
print(f"连接池大小: {settings.database_pool_size}")
|
||||
|
||||
# 尝试连接
|
||||
conn = await asyncpg.connect(
|
||||
host=settings.database_host,
|
||||
port=settings.database_port,
|
||||
user=settings.database_user,
|
||||
password=settings.database_password,
|
||||
database=settings.database_name,
|
||||
)
|
||||
|
||||
# 查询版本
|
||||
version = await conn.fetchval("SELECT version()")
|
||||
print(f"\n✓ PostgreSQL 连接成功")
|
||||
print(f"版本: {version.split(',')[0]}")
|
||||
|
||||
# 查询数据库大小
|
||||
db_size = await conn.fetchval(
|
||||
"SELECT pg_size_pretty(pg_database_size($1))",
|
||||
settings.database_name
|
||||
)
|
||||
print(f"数据库大小: {db_size}")
|
||||
|
||||
# 查询表数量
|
||||
table_count = await conn.fetchval("""
|
||||
SELECT COUNT(*)
|
||||
FROM information_schema.tables
|
||||
WHERE table_schema = 'public'
|
||||
""")
|
||||
print(f"数据表数量: {table_count}")
|
||||
|
||||
await conn.close()
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
print(f"\n✗ PostgreSQL 连接失败: {e}")
|
||||
print("\n请检查:")
|
||||
print(" 1. PostgreSQL 是否正在运行")
|
||||
print(" 2. 数据库是否已创建 (运行: python scripts/create_db.py)")
|
||||
print(" 3. .env 文件中的账号密码是否正确")
|
||||
return False
|
||||
|
||||
|
||||
async def check_redis():
|
||||
"""检查 Redis 连接"""
|
||||
print("\n" + "=" * 60)
|
||||
print("检查 Redis 配置")
|
||||
print("=" * 60)
|
||||
|
||||
try:
|
||||
# 连接参数
|
||||
print(f"主机: {settings.redis_host}")
|
||||
print(f"端口: {settings.redis_port}")
|
||||
print(f"数据库: {settings.redis_db}")
|
||||
print(f"密码: {'(无)' if not settings.redis_password else '******'}")
|
||||
print(f"最大连接数: {settings.redis_max_connections}")
|
||||
|
||||
# 尝试连接
|
||||
client = redis.from_url(
|
||||
settings.redis_url,
|
||||
encoding="utf-8",
|
||||
decode_responses=True,
|
||||
)
|
||||
|
||||
# 测试连接
|
||||
await client.ping()
|
||||
print(f"\n✓ Redis 连接成功")
|
||||
|
||||
# 获取 Redis 信息
|
||||
info = await client.info()
|
||||
print(f"版本: {info.get('redis_version')}")
|
||||
print(f"使用内存: {info.get('used_memory_human')}")
|
||||
print(f"已连接客户端: {info.get('connected_clients')}")
|
||||
print(f"运行天数: {info.get('uptime_in_days')} 天")
|
||||
|
||||
await client.close()
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
print(f"\n⚠ Redis 连接失败: {e}")
|
||||
print("\n说明:")
|
||||
print(" Redis 是可选的缓存服务")
|
||||
print(" 如果 Redis 不可用,应用会自动降级为内存缓存")
|
||||
print(" 不影响核心功能,但会失去跨进程缓存能力")
|
||||
print("\n如需启用 Redis:")
|
||||
print(" - macOS: brew install redis && brew services start redis")
|
||||
print(" - Ubuntu: sudo apt install redis && sudo systemctl start redis")
|
||||
return False
|
||||
|
||||
|
||||
def check_env_file():
|
||||
"""检查 .env 文件"""
|
||||
print("\n" + "=" * 60)
|
||||
print("检查配置文件")
|
||||
print("=" * 60)
|
||||
|
||||
env_path = Path(__file__).parent.parent / ".env"
|
||||
|
||||
if env_path.exists():
|
||||
print(f"✓ .env 文件存在: {env_path}")
|
||||
print(f"文件大小: {env_path.stat().st_size} bytes")
|
||||
return True
|
||||
else:
|
||||
print(f"✗ .env 文件不存在")
|
||||
print(f"请从 .env.example 创建: cp .env.example .env")
|
||||
return False
|
||||
|
||||
|
||||
def check_upload_dir():
|
||||
"""检查上传目录"""
|
||||
print("\n" + "=" * 60)
|
||||
print("检查上传目录")
|
||||
print("=" * 60)
|
||||
|
||||
upload_dir = Path(__file__).parent.parent / settings.upload_dir
|
||||
|
||||
if upload_dir.exists():
|
||||
print(f"✓ 上传目录存在: {upload_dir}")
|
||||
return True
|
||||
else:
|
||||
print(f"⚠ 上传目录不存在: {upload_dir}")
|
||||
print(f"自动创建...")
|
||||
upload_dir.mkdir(parents=True, exist_ok=True)
|
||||
print(f"✓ 上传目录创建成功")
|
||||
return True
|
||||
|
||||
|
||||
async def main():
|
||||
"""主函数"""
|
||||
print("\n" + "=" * 60)
|
||||
print(" Cosmo 配置验证工具")
|
||||
print("=" * 60)
|
||||
|
||||
results = []
|
||||
|
||||
# 1. 检查配置文件
|
||||
results.append(("配置文件", check_env_file()))
|
||||
|
||||
# 2. 检查上传目录
|
||||
results.append(("上传目录", check_upload_dir()))
|
||||
|
||||
# 3. 检查 PostgreSQL
|
||||
results.append(("PostgreSQL", await check_postgresql()))
|
||||
|
||||
# 4. 检查 Redis
|
||||
results.append(("Redis", await check_redis()))
|
||||
|
||||
# 总结
|
||||
print("\n" + "=" * 60)
|
||||
print(" 配置检查总结")
|
||||
print("=" * 60)
|
||||
|
||||
for name, status in results:
|
||||
status_str = "✓" if status else "✗"
|
||||
print(f"{status_str} {name}")
|
||||
|
||||
# 判断是否所有必需服务都正常
|
||||
required_services = [results[0], results[1], results[2]] # 配置文件、上传目录、PostgreSQL
|
||||
all_required_ok = all(status for _, status in required_services)
|
||||
|
||||
if all_required_ok:
|
||||
print("\n" + "=" * 60)
|
||||
print(" ✓ 所有必需服务配置正确!")
|
||||
print("=" * 60)
|
||||
print("\n可以启动服务:")
|
||||
print(" python -m uvicorn app.main:app --reload")
|
||||
print("\n或者:")
|
||||
print(" python app/main.py")
|
||||
return 0
|
||||
else:
|
||||
print("\n" + "=" * 60)
|
||||
print(" ✗ 部分必需服务配置有问题")
|
||||
print("=" * 60)
|
||||
print("\n请先解决上述问题,然后重新运行此脚本")
|
||||
return 1
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
exit_code = asyncio.run(main())
|
||||
sys.exit(exit_code)
|
||||
|
|
@ -0,0 +1,63 @@
|
|||
"""
|
||||
Check probe data in database
|
||||
"""
|
||||
import asyncio
|
||||
import sys
|
||||
import os
|
||||
|
||||
# Add backend to path
|
||||
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
|
||||
from sqlalchemy import create_engine, text
|
||||
from app.config import settings
|
||||
|
||||
|
||||
def check_probes():
|
||||
"""Check probe data directly with SQL"""
|
||||
engine = create_engine(settings.database_url.replace('+asyncpg', ''))
|
||||
|
||||
with engine.connect() as conn:
|
||||
# Check all celestial bodies
|
||||
result = conn.execute(text("""
|
||||
SELECT
|
||||
cb.id,
|
||||
cb.name,
|
||||
cb.name_zh,
|
||||
cb.type,
|
||||
cb.is_active,
|
||||
COUNT(p.id) as position_count
|
||||
FROM celestial_bodies cb
|
||||
LEFT JOIN positions p ON cb.id = p.body_id
|
||||
GROUP BY cb.id, cb.name, cb.name_zh, cb.type, cb.is_active
|
||||
ORDER BY cb.type, cb.name
|
||||
"""))
|
||||
|
||||
print("All Celestial Bodies:")
|
||||
print("=" * 100)
|
||||
for row in result:
|
||||
print(f"ID: {row.id:15s} | Name: {row.name:20s} | Type: {row.type:15s} | Active: {str(row.is_active):5s} | Positions: {row.position_count}")
|
||||
|
||||
print("\n" + "=" * 100)
|
||||
print("\nProbes only:")
|
||||
print("=" * 100)
|
||||
|
||||
result = conn.execute(text("""
|
||||
SELECT
|
||||
cb.id,
|
||||
cb.name,
|
||||
cb.name_zh,
|
||||
cb.is_active,
|
||||
COUNT(p.id) as position_count
|
||||
FROM celestial_bodies cb
|
||||
LEFT JOIN positions p ON cb.id = p.body_id
|
||||
WHERE cb.type = 'probe'
|
||||
GROUP BY cb.id, cb.name, cb.name_zh, cb.is_active
|
||||
ORDER BY cb.name
|
||||
"""))
|
||||
|
||||
for row in result:
|
||||
print(f"ID: {row.id:15s} | Name: {row.name:20s} ({row.name_zh}) | Active: {str(row.is_active):5s} | Positions: {row.position_count}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
check_probes()
|
||||
|
|
@ -0,0 +1,42 @@
|
|||
-- 清理数据库重复数据
|
||||
|
||||
-- 1. 清理 positions 表的重复数据
|
||||
-- 保留每个 (body_id, time) 组合的最新一条记录
|
||||
|
||||
WITH duplicates AS (
|
||||
SELECT id,
|
||||
ROW_NUMBER() OVER (
|
||||
PARTITION BY body_id, time
|
||||
ORDER BY created_at DESC
|
||||
) as rn
|
||||
FROM positions
|
||||
)
|
||||
DELETE FROM positions
|
||||
WHERE id IN (
|
||||
SELECT id FROM duplicates WHERE rn > 1
|
||||
);
|
||||
|
||||
-- 2. 清理 nasa_cache 表的重复数据
|
||||
-- 保留每个 cache_key 的最新一条记录
|
||||
|
||||
WITH duplicates AS (
|
||||
SELECT id,
|
||||
ROW_NUMBER() OVER (
|
||||
PARTITION BY cache_key
|
||||
ORDER BY created_at DESC
|
||||
) as rn
|
||||
FROM nasa_cache
|
||||
)
|
||||
DELETE FROM nasa_cache
|
||||
WHERE id IN (
|
||||
SELECT id FROM duplicates WHERE rn > 1
|
||||
);
|
||||
|
||||
-- 3. 验证清理结果
|
||||
SELECT 'Positions duplicates check' as check_name,
|
||||
COUNT(*) - COUNT(DISTINCT (body_id, time)) as duplicate_count
|
||||
FROM positions
|
||||
UNION ALL
|
||||
SELECT 'NASA cache duplicates check' as check_name,
|
||||
COUNT(*) - COUNT(DISTINCT cache_key) as duplicate_count
|
||||
FROM nasa_cache;
|
||||
|
|
@ -0,0 +1,59 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
Create PostgreSQL database for Cosmo
|
||||
|
||||
Usage:
|
||||
python scripts/create_db.py
|
||||
"""
|
||||
import asyncio
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from app.config import settings
|
||||
import asyncpg
|
||||
import logging
|
||||
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
async def main():
|
||||
"""Create database if it doesn't exist"""
|
||||
# Connect to postgres database (default database)
|
||||
try:
|
||||
conn = await asyncpg.connect(
|
||||
host=settings.database_host,
|
||||
port=settings.database_port,
|
||||
user=settings.database_user,
|
||||
password=settings.database_password,
|
||||
database="postgres", # Connect to default database
|
||||
)
|
||||
|
||||
# Check if database exists
|
||||
exists = await conn.fetchval(
|
||||
"SELECT 1 FROM pg_database WHERE datname = $1",
|
||||
settings.database_name
|
||||
)
|
||||
|
||||
if exists:
|
||||
logger.info(f"✓ Database '{settings.database_name}' already exists")
|
||||
else:
|
||||
# Create database
|
||||
await conn.execute(f'CREATE DATABASE {settings.database_name}')
|
||||
logger.info(f"✓ Database '{settings.database_name}' created successfully")
|
||||
|
||||
await conn.close()
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"✗ Failed to create database: {e}")
|
||||
logger.error("\nPlease ensure:")
|
||||
logger.error(" 1. PostgreSQL is running")
|
||||
logger.error(" 2. Database credentials in .env are correct")
|
||||
logger.error(f" 3. User '{settings.database_user}' has permission to create databases")
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
|
|
@ -0,0 +1,88 @@
|
|||
-- ============================================================
|
||||
-- Create orbits table for storing precomputed orbital paths
|
||||
-- ============================================================
|
||||
-- Purpose: Store complete orbital trajectories for planets and dwarf planets
|
||||
-- This eliminates the need to query NASA Horizons API for orbit visualization
|
||||
--
|
||||
-- Usage:
|
||||
-- psql -U your_user -d cosmo < create_orbits_table.sql
|
||||
-- OR execute in your SQL client/tool
|
||||
--
|
||||
-- Version: 1.0
|
||||
-- Created: 2025-11-29
|
||||
-- ============================================================
|
||||
|
||||
-- Create orbits table
|
||||
CREATE TABLE IF NOT EXISTS orbits (
|
||||
id SERIAL PRIMARY KEY,
|
||||
body_id TEXT NOT NULL,
|
||||
points JSONB NOT NULL, -- Array of orbital points: [{"x": 1.0, "y": 0.0, "z": 0.0}, ...]
|
||||
num_points INTEGER NOT NULL, -- Number of points in the orbit
|
||||
period_days FLOAT, -- Orbital period in days
|
||||
color VARCHAR(20), -- Orbit line color (hex format: #RRGGBB)
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW(),
|
||||
CONSTRAINT orbits_body_id_unique UNIQUE(body_id),
|
||||
CONSTRAINT orbits_body_id_fkey FOREIGN KEY (body_id) REFERENCES celestial_bodies(id) ON DELETE CASCADE
|
||||
);
|
||||
|
||||
-- Create index on body_id for fast lookups
|
||||
CREATE INDEX IF NOT EXISTS idx_orbits_body_id ON orbits(body_id);
|
||||
|
||||
-- Create index on updated_at for tracking data freshness
|
||||
CREATE INDEX IF NOT EXISTS idx_orbits_updated_at ON orbits(updated_at);
|
||||
|
||||
-- Add comments to table
|
||||
COMMENT ON TABLE orbits IS 'Precomputed orbital paths for celestial bodies';
|
||||
COMMENT ON COLUMN orbits.body_id IS 'Foreign key to celestial_bodies.id';
|
||||
COMMENT ON COLUMN orbits.points IS 'Array of 3D points (x,y,z in AU) defining the orbital path';
|
||||
COMMENT ON COLUMN orbits.num_points IS 'Total number of points in the orbit';
|
||||
COMMENT ON COLUMN orbits.period_days IS 'Orbital period in Earth days';
|
||||
COMMENT ON COLUMN orbits.color IS 'Hex color code for rendering the orbit line';
|
||||
|
||||
-- ============================================================
|
||||
-- Sample data for testing (optional - can be removed)
|
||||
-- ============================================================
|
||||
-- Uncomment below to insert sample orbit for Earth
|
||||
/*
|
||||
INSERT INTO orbits (body_id, points, num_points, period_days, color)
|
||||
VALUES (
|
||||
'399', -- Earth
|
||||
'[
|
||||
{"x": 1.0, "y": 0.0, "z": 0.0},
|
||||
{"x": 0.707, "y": 0.707, "z": 0.0},
|
||||
{"x": 0.0, "y": 1.0, "z": 0.0},
|
||||
{"x": -0.707, "y": 0.707, "z": 0.0},
|
||||
{"x": -1.0, "y": 0.0, "z": 0.0},
|
||||
{"x": -0.707, "y": -0.707, "z": 0.0},
|
||||
{"x": 0.0, "y": -1.0, "z": 0.0},
|
||||
{"x": 0.707, "y": -0.707, "z": 0.0}
|
||||
]'::jsonb,
|
||||
8,
|
||||
365.25,
|
||||
'#4A90E2'
|
||||
)
|
||||
ON CONFLICT (body_id) DO UPDATE
|
||||
SET
|
||||
points = EXCLUDED.points,
|
||||
num_points = EXCLUDED.num_points,
|
||||
period_days = EXCLUDED.period_days,
|
||||
color = EXCLUDED.color,
|
||||
updated_at = NOW();
|
||||
*/
|
||||
|
||||
-- ============================================================
|
||||
-- Verification queries (execute separately if needed)
|
||||
-- ============================================================
|
||||
-- Check if table was created successfully
|
||||
-- SELECT schemaname, tablename, tableowner FROM pg_tables WHERE tablename = 'orbits';
|
||||
|
||||
-- Check indexes
|
||||
-- SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'orbits';
|
||||
|
||||
-- Show table structure
|
||||
-- SELECT column_name, data_type, is_nullable, column_default
|
||||
-- FROM information_schema.columns
|
||||
-- WHERE table_name = 'orbits'
|
||||
-- ORDER BY ordinal_position;
|
||||
|
||||
|
|
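Once orbits are generated, the JSONB `points` column can be sanity-checked without loading full trajectories. A small Python sketch using the project's async engine (illustrative only):

```python
# Illustrative check: compare stored num_points with the JSONB array length.
import asyncio

from sqlalchemy import text

from app.database import engine


async def orbit_summary() -> None:
    async with engine.connect() as conn:
        result = await conn.execute(text(
            "SELECT body_id, num_points, jsonb_array_length(points) AS stored, "
            "period_days FROM orbits ORDER BY period_days"
        ))
        for row in result:
            print(row.body_id, row.num_points, row.stored, row.period_days)


asyncio.run(orbit_summary())
```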
@ -0,0 +1,200 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
Fetch celestial body positions from NASA Horizons API and cache them
|
||||
|
||||
This script:
|
||||
1. Fetches position data for all celestial bodies
|
||||
2. Caches data in Redis (L2 cache)
|
||||
3. Saves data to PostgreSQL (L3 cache/persistent storage)
|
||||
|
||||
Usage:
|
||||
python scripts/fetch_and_cache.py [--days DAYS]
|
||||
|
||||
Options:
|
||||
--days DAYS Number of days to fetch (default: 7)
|
||||
"""
|
||||
import asyncio
|
||||
import sys
|
||||
from pathlib import Path
|
||||
from datetime import datetime, timedelta
|
||||
import argparse
|
||||
import logging
|
||||
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from app.services.horizons import horizons_service
|
||||
from app.services.db_service import (
|
||||
celestial_body_service,
|
||||
position_service,
|
||||
nasa_cache_service
|
||||
)
|
||||
from app.services.redis_cache import redis_cache, cache_nasa_response
|
||||
from app.config import settings
|
||||
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
async def fetch_and_cache_body(body_id: str, body_name: str, days: int = 7):
|
||||
"""Fetch and cache position data for a single celestial body"""
|
||||
logger.info(f"Fetching data for {body_name} ({body_id})...")
|
||||
|
||||
try:
|
||||
# Calculate time range
|
||||
now = datetime.utcnow()
|
||||
start_time = now
|
||||
end_time = now + timedelta(days=days)
|
||||
step = "1d"
|
||||
|
||||
# Fetch positions from NASA API (synchronous call in async context)
|
||||
loop = asyncio.get_running_loop()
|
||||
positions = await loop.run_in_executor(
|
||||
None,
|
||||
horizons_service.get_body_positions,
|
||||
body_id,
|
||||
start_time,
|
||||
end_time,
|
||||
step
|
||||
)
|
||||
|
||||
if not positions:
|
||||
logger.warning(f"No positions returned for {body_name}")
|
||||
return False
|
||||
|
||||
logger.info(f"Fetched {len(positions)} positions for {body_name}")
|
||||
|
||||
# Prepare data for caching
|
||||
position_data = [
|
||||
{
|
||||
"time": pos.time,
|
||||
"x": pos.x,
|
||||
"y": pos.y,
|
||||
"z": pos.z,
|
||||
}
|
||||
for pos in positions
|
||||
]
|
||||
|
||||
# Cache in Redis (L2)
|
||||
redis_cached = await cache_nasa_response(
|
||||
body_id=body_id,
|
||||
start_time=start_time,
|
||||
end_time=end_time,
|
||||
step=step,
|
||||
data=position_data
|
||||
)
|
||||
|
||||
if redis_cached:
|
||||
logger.info(f"✓ Cached {body_name} data in Redis")
|
||||
else:
|
||||
logger.warning(f"⚠ Failed to cache {body_name} data in Redis")
|
||||
|
||||
# Save to PostgreSQL (L3 - persistent storage)
|
||||
# Save raw NASA response for future cache hits
|
||||
await nasa_cache_service.save_response(
|
||||
body_id=body_id,
|
||||
start_time=start_time,
|
||||
end_time=end_time,
|
||||
step=step,
|
||||
response_data={"positions": position_data},
|
||||
ttl_days=settings.cache_ttl_days
|
||||
)
|
||||
logger.info(f"✓ Cached {body_name} data in PostgreSQL (nasa_cache)")
|
||||
|
||||
# Save positions to positions table for querying
|
||||
saved_count = await position_service.save_positions(
|
||||
body_id=body_id,
|
||||
positions=position_data,
|
||||
source="nasa_horizons"
|
||||
)
|
||||
logger.info(f"✓ Saved {saved_count} positions for {body_name} in PostgreSQL")
|
||||
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"✗ Failed to fetch/cache {body_name}: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
return False
|
||||
|
||||
|
||||
async def main():
|
||||
"""Fetch and cache data for all celestial bodies"""
|
||||
parser = argparse.ArgumentParser(description='Fetch and cache celestial body positions')
|
||||
parser.add_argument('--days', type=int, default=7, help='Number of days to fetch (default: 7)')
|
||||
args = parser.parse_args()
|
||||
|
||||
logger.info("=" * 60)
|
||||
logger.info("Fetch and Cache NASA Horizons Data")
|
||||
logger.info("=" * 60)
|
||||
logger.info(f"Time range: {args.days} days from now")
|
||||
logger.info("=" * 60)
|
||||
|
||||
# Connect to Redis
|
||||
await redis_cache.connect()
|
||||
|
||||
try:
|
||||
# Get all celestial bodies from database
|
||||
bodies = await celestial_body_service.get_all_bodies()
|
||||
logger.info(f"\nFound {len(bodies)} celestial bodies in database")
|
||||
|
||||
# Filter for probes and planets (skip stars)
|
||||
bodies_to_fetch = [
|
||||
body for body in bodies
|
||||
if body.type in ['probe', 'planet']
|
||||
]
|
||||
logger.info(f"Will fetch data for {len(bodies_to_fetch)} bodies (probes + planets)")
|
||||
|
||||
# Fetch and cache data for each body
|
||||
success_count = 0
|
||||
fail_count = 0
|
||||
|
||||
for i, body in enumerate(bodies_to_fetch, 1):
|
||||
logger.info(f"\n[{i}/{len(bodies_to_fetch)}] Processing {body.name}...")
|
||||
success = await fetch_and_cache_body(
|
||||
body_id=body.id,
|
||||
body_name=body.name,
|
||||
days=args.days
|
||||
)
|
||||
|
||||
if success:
|
||||
success_count += 1
|
||||
else:
|
||||
fail_count += 1
|
||||
|
||||
# Small delay to avoid overwhelming NASA API
|
||||
if i < len(bodies_to_fetch):
|
||||
await asyncio.sleep(0.5)
|
||||
|
||||
# Summary
|
||||
logger.info("\n" + "=" * 60)
|
||||
logger.info("Summary")
|
||||
logger.info("=" * 60)
|
||||
logger.info(f"✓ Successfully cached: {success_count} bodies")
|
||||
if fail_count > 0:
|
||||
logger.warning(f"✗ Failed: {fail_count} bodies")
|
||||
logger.info("=" * 60)
|
||||
|
||||
# Check cache status
|
||||
redis_stats = await redis_cache.get_stats()
|
||||
if redis_stats.get("connected"):
|
||||
logger.info("\nRedis Cache Status:")
|
||||
logger.info(f" Memory: {redis_stats.get('used_memory_human')}")
|
||||
logger.info(f" Clients: {redis_stats.get('connected_clients')}")
|
||||
logger.info(f" Hits: {redis_stats.get('keyspace_hits')}")
|
||||
logger.info(f" Misses: {redis_stats.get('keyspace_misses')}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"\n✗ Failed: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
sys.exit(1)
|
||||
finally:
|
||||
# Disconnect from Redis
|
||||
await redis_cache.disconnect()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
|
|
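This script only fills the L2 (Redis) and L3 (PostgreSQL) layers; the read path that consumes them is not part of this change. A hedged sketch of what that lookup order looks like — `nasa_cache_service.get_response` is an assumed counterpart of the `save_response` call above, not an existing method:

```python
# Hedged sketch of the three-layer read path (L2 Redis -> L3 PostgreSQL -> NASA).
from datetime import datetime
from typing import Any

from app.services.db_service import nasa_cache_service
from app.services.horizons import horizons_service
from app.services.redis_cache import cache_nasa_response, get_cached_nasa_response


async def get_positions(body_id: str, start: datetime, end: datetime, step: str) -> Any:
    # L2: Redis
    data = await get_cached_nasa_response(body_id, start, end, step)
    if data is not None:
        return data

    # L3: PostgreSQL nasa_cache. get_response is an assumed counterpart
    # of save_response; adjust to the real lookup method.
    cached = await nasa_cache_service.get_response(body_id, start, end, step)
    if cached is not None:
        return cached.data

    # Miss everywhere: query NASA Horizons, then write back to both layers.
    positions = horizons_service.get_body_positions(body_id, start, end, step)
    payload = [{"time": p.time, "x": p.x, "y": p.y, "z": p.z} for p in positions]
    await cache_nasa_response(body_id, start, end, step, payload)
    await nasa_cache_service.save_response(
        body_id=body_id, start_time=start, end_time=end, step=step,
        response_data={"positions": payload}, ttl_days=3,
    )
    return payload
```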
@ -0,0 +1,79 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
Database initialization script
|
||||
|
||||
Creates all tables in the PostgreSQL database.
|
||||
|
||||
Usage:
|
||||
python scripts/init_db.py
|
||||
"""
|
||||
import asyncio
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
# Add parent directory to path to import app modules
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from app.database import init_db, close_db, engine
|
||||
from app.config import settings
|
||||
from sqlalchemy import text
|
||||
import logging
|
||||
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
async def main():
|
||||
"""Initialize database"""
|
||||
logger.info("=" * 60)
|
||||
logger.info("Cosmo Database Initialization")
|
||||
logger.info("=" * 60)
|
||||
logger.info(f"Database URL: {settings.database_url.split('@')[1]}") # Hide password
|
||||
logger.info("=" * 60)
|
||||
|
||||
try:
|
||||
# Test database connection
|
||||
logger.info("Testing database connection...")
|
||||
async with engine.begin() as conn:
|
||||
await conn.execute(text("SELECT 1"))
|
||||
logger.info("✓ Database connection successful")
|
||||
|
||||
# Create all tables
|
||||
logger.info("Creating database tables...")
|
||||
await init_db()
|
||||
logger.info("✓ All tables created successfully")
|
||||
|
||||
# Display created tables
|
||||
async with engine.connect() as conn:
|
||||
result = await conn.execute(text("""
|
||||
SELECT table_name
|
||||
FROM information_schema.tables
|
||||
WHERE table_schema = 'public'
|
||||
ORDER BY table_name
|
||||
"""))
|
||||
tables = [row[0] for row in result]
|
||||
|
||||
logger.info(f"\nCreated {len(tables)} tables:")
|
||||
for table in tables:
|
||||
logger.info(f" - {table}")
|
||||
|
||||
logger.info("\n" + "=" * 60)
|
||||
logger.info("Database initialization completed successfully!")
|
||||
logger.info("=" * 60)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"\n✗ Database initialization failed: {e}")
|
||||
logger.error("\nPlease ensure:")
|
||||
logger.error(" 1. PostgreSQL is running")
|
||||
logger.error(" 2. Database 'cosmo_db' exists")
|
||||
logger.error(" 3. Database credentials in .env are correct")
|
||||
sys.exit(1)
|
||||
finally:
|
||||
await close_db()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
|
|
@ -0,0 +1,31 @@
|
|||
"""
|
||||
List celestial bodies from database
|
||||
"""
|
||||
import asyncio
|
||||
from app.database import get_db
|
||||
from app.models.db.celestial_body import CelestialBody
|
||||
|
||||
|
||||
async def list_celestial_bodies():
|
||||
"""List all celestial bodies"""
|
||||
async for session in get_db():
|
||||
try:
|
||||
from sqlalchemy import select
|
||||
|
||||
stmt = select(CelestialBody).order_by(CelestialBody.type, CelestialBody.id)
|
||||
result = await session.execute(stmt)
|
||||
bodies = result.scalars().all()
|
||||
|
||||
print(f"\n📊 Found {len(bodies)} celestial bodies:\n")
|
||||
print(f"{'ID':<20} {'Name':<25} {'Type':<10}")
|
||||
print("=" * 60)
|
||||
|
||||
for body in bodies:
|
||||
print(f"{body.id:<20} {body.name:<25} {body.type:<10}")
|
||||
|
||||
finally:
|
||||
break
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(list_celestial_bodies())
|
||||
|
|
@ -0,0 +1,184 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
Data migration script
|
||||
|
||||
Migrates existing data from code/JSON files to PostgreSQL database:
|
||||
1. CELESTIAL_BODIES dict → celestial_bodies table
|
||||
2. Frontend JSON files → static_data table
|
||||
|
||||
Usage:
|
||||
python scripts/migrate_data.py [--force | --skip-existing]
|
||||
|
||||
Options:
|
||||
--force Overwrite existing data without prompting
|
||||
--skip-existing Skip migration if data already exists
|
||||
"""
|
||||
import asyncio
|
||||
import sys
|
||||
from pathlib import Path
|
||||
import json
|
||||
import argparse
|
||||
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from app.database import AsyncSessionLocal
|
||||
from app.models.celestial import CELESTIAL_BODIES
|
||||
from app.models.db import CelestialBody, StaticData
|
||||
from sqlalchemy import select
|
||||
import logging
|
||||
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
async def migrate_celestial_bodies(force: bool = False, skip_existing: bool = False):
|
||||
"""Migrate CELESTIAL_BODIES dict to database"""
|
||||
logger.info("=" * 60)
|
||||
logger.info("Migrating celestial bodies...")
|
||||
logger.info("=" * 60)
|
||||
|
||||
async with AsyncSessionLocal() as session:
|
||||
# Check if data already exists
|
||||
result = await session.execute(select(CelestialBody))
|
||||
existing_count = len(result.scalars().all())
|
||||
|
||||
if existing_count > 0:
|
||||
logger.warning(f"Found {existing_count} existing celestial bodies in database")
|
||||
|
||||
if skip_existing:
|
||||
logger.info("Skipping celestial bodies migration (--skip-existing)")
|
||||
return
|
||||
|
||||
if not force:
|
||||
response = input("Do you want to overwrite? (yes/no): ")
|
||||
if response.lower() not in ['yes', 'y']:
|
||||
logger.info("Skipping celestial bodies migration")
|
||||
return
|
||||
else:
|
||||
logger.info("Overwriting existing data (--force)")
|
||||
|
||||
# Delete existing data
|
||||
from sqlalchemy import text
|
||||
await session.execute(text("DELETE FROM celestial_bodies"))
|
||||
logger.info(f"Deleted {existing_count} existing records")
|
||||
|
||||
# Insert new data
|
||||
count = 0
|
||||
for body_id, info in CELESTIAL_BODIES.items():
|
||||
body = CelestialBody(
|
||||
id=body_id,
|
||||
name=info["name"],
|
||||
name_zh=info.get("name_zh"),
|
||||
type=info["type"],
|
||||
description=info.get("description"),
|
||||
extra_data={
|
||||
"launch_date": info.get("launch_date"),
|
||||
"status": info.get("status"),
|
||||
} if "launch_date" in info or "status" in info else None
|
||||
)
|
||||
session.add(body)
|
||||
count += 1
|
||||
|
||||
await session.commit()
|
||||
logger.info(f"✓ Migrated {count} celestial bodies")
|
||||
|
||||
|
||||
async def migrate_static_data(force: bool = False, skip_existing: bool = False):
|
||||
"""Migrate frontend JSON files to database"""
|
||||
logger.info("=" * 60)
|
||||
logger.info("Migrating static data from JSON files...")
|
||||
logger.info("=" * 60)
|
||||
|
||||
# Define JSON files to migrate
|
||||
frontend_data_dir = Path(__file__).parent.parent.parent / "frontend" / "public" / "data"
|
||||
json_files = {
|
||||
"nearby-stars.json": "star",
|
||||
"constellations.json": "constellation",
|
||||
"galaxies.json": "galaxy",
|
||||
}
|
||||
|
||||
async with AsyncSessionLocal() as session:
|
||||
for filename, category in json_files.items():
|
||||
file_path = frontend_data_dir / filename
|
||||
if not file_path.exists():
|
||||
logger.warning(f"File not found: {file_path}")
|
||||
continue
|
||||
|
||||
# Load JSON data
|
||||
with open(file_path, 'r', encoding='utf-8') as f:
|
||||
data_list = json.load(f)
|
||||
|
||||
# Check if category data already exists
|
||||
result = await session.execute(
|
||||
select(StaticData).where(StaticData.category == category)
|
||||
)
|
||||
existing = result.scalars().all()
|
||||
|
||||
if existing:
|
||||
logger.warning(f"Found {len(existing)} existing {category} records")
|
||||
|
||||
if skip_existing:
|
||||
logger.info(f"Skipping {category} migration (--skip-existing)")
|
||||
continue
|
||||
|
||||
if not force:
|
||||
response = input(f"Overwrite {category} data? (yes/no): ")
|
||||
if response.lower() not in ['yes', 'y']:
|
||||
logger.info(f"Skipping {category} migration")
|
||||
continue
|
||||
else:
|
||||
logger.info(f"Overwriting {category} data (--force)")
|
||||
|
||||
# Delete existing
|
||||
for record in existing:
|
||||
await session.delete(record)
|
||||
|
||||
# Insert new data
|
||||
count = 0
|
||||
for item in data_list:
|
||||
static_item = StaticData(
|
||||
category=category,
|
||||
name=item.get("name", "Unknown"),
|
||||
name_zh=item.get("name_zh"),
|
||||
data=item
|
||||
)
|
||||
session.add(static_item)
|
||||
count += 1
|
||||
|
||||
await session.commit()
|
||||
logger.info(f"✓ Migrated {count} {category} records")
|
||||
|
||||
|
||||
async def main():
|
||||
"""Run all migrations"""
|
||||
# Parse command line arguments
|
||||
parser = argparse.ArgumentParser(description='Migrate data to PostgreSQL database')
|
||||
group = parser.add_mutually_exclusive_group()
|
||||
group.add_argument('--force', action='store_true', help='Overwrite existing data without prompting')
|
||||
group.add_argument('--skip-existing', action='store_true', help='Skip migration if data already exists')
|
||||
args = parser.parse_args()
|
||||
|
||||
logger.info("\n" + "=" * 60)
|
||||
logger.info("Cosmo Data Migration")
|
||||
logger.info("=" * 60 + "\n")
|
||||
|
||||
try:
|
||||
# Migrate celestial bodies
|
||||
await migrate_celestial_bodies(force=args.force, skip_existing=args.skip_existing)
|
||||
|
||||
# Migrate static data
|
||||
await migrate_static_data(force=args.force, skip_existing=args.skip_existing)
|
||||
|
||||
logger.info("\n" + "=" * 60)
|
||||
logger.info("✓ Migration completed successfully!")
|
||||
logger.info("=" * 60)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"\n✗ Migration failed: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
|
|
@ -0,0 +1,143 @@
|
|||
"""
|
||||
Populate resources table with texture and model files
|
||||
"""
|
||||
import asyncio
|
||||
import os
|
||||
from pathlib import Path
|
||||
from sqlalchemy.dialects.postgresql import insert as pg_insert
|
||||
from app.database import get_db
|
||||
from app.models.db.resource import Resource
|
||||
|
||||
|
||||
# Mapping of texture files to celestial body IDs (use numeric Horizons IDs)
|
||||
TEXTURE_MAPPING = {
|
||||
"2k_sun.jpg": {"body_id": "10", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_mercury.jpg": {"body_id": "199", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_venus_surface.jpg": {"body_id": "299", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_venus_atmosphere.jpg": {"body_id": "299", "resource_type": "texture", "mime_type": "image/jpeg", "extra_data": {"layer": "atmosphere"}},
|
||||
"2k_earth_daymap.jpg": {"body_id": "399", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_earth_nightmap.jpg": {"body_id": "399", "resource_type": "texture", "mime_type": "image/jpeg", "extra_data": {"layer": "night"}},
|
||||
"2k_moon.jpg": {"body_id": "301", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_mars.jpg": {"body_id": "499", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_jupiter.jpg": {"body_id": "599", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_saturn.jpg": {"body_id": "699", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_saturn_ring_alpha.png": {"body_id": "699", "resource_type": "texture", "mime_type": "image/png", "extra_data": {"layer": "ring"}},
|
||||
"2k_uranus.jpg": {"body_id": "799", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_neptune.jpg": {"body_id": "899", "resource_type": "texture", "mime_type": "image/jpeg"},
|
||||
"2k_stars_milky_way.jpg": {"body_id": None, "resource_type": "texture", "mime_type": "image/jpeg", "extra_data": {"usage": "skybox"}},
|
||||
}
|
||||
|
||||
# Mapping of model files to celestial body IDs (use numeric probe IDs)
|
||||
MODEL_MAPPING = {
|
||||
"voyager_1.glb": {"body_id": "-31", "resource_type": "model", "mime_type": "model/gltf-binary"},
|
||||
"voyager_2.glb": {"body_id": "-32", "resource_type": "model", "mime_type": "model/gltf-binary"},
|
||||
"juno.glb": {"body_id": "-61", "resource_type": "model", "mime_type": "model/gltf-binary"},
|
||||
"parker_solar_probe.glb": {"body_id": "-96", "resource_type": "model", "mime_type": "model/gltf-binary"},
|
||||
"cassini.glb": {"body_id": "-82", "resource_type": "model", "mime_type": "model/gltf-binary"},
|
||||
}
|
||||
|
||||
|
||||
async def populate_resources():
|
||||
"""Populate resources table with texture and model files"""
|
||||
|
||||
# Get upload directory path
|
||||
upload_dir = Path(__file__).parent.parent / "upload"
|
||||
texture_dir = upload_dir / "texture"
|
||||
model_dir = upload_dir / "model"
|
||||
|
||||
print(f"📂 Scanning upload directory: {upload_dir}")
|
||||
print(f"📂 Texture directory: {texture_dir}")
|
||||
print(f"📂 Model directory: {model_dir}")
|
||||
|
||||
async for session in get_db():
|
||||
try:
|
||||
# Process textures
|
||||
print("\n🖼️ Processing textures...")
|
||||
texture_count = 0
|
||||
for filename, mapping in TEXTURE_MAPPING.items():
|
||||
file_path = texture_dir / filename
|
||||
if not file_path.exists():
|
||||
print(f"⚠️ Warning: Texture file not found: {filename}")
|
||||
continue
|
||||
|
||||
file_size = file_path.stat().st_size
|
||||
|
||||
# Prepare resource data
|
||||
resource_data = {
|
||||
"body_id": mapping["body_id"],
|
||||
"resource_type": mapping["resource_type"],
|
||||
"file_path": f"texture/{filename}",
|
||||
"file_size": file_size,
|
||||
"mime_type": mapping["mime_type"],
|
||||
"extra_data": mapping.get("extra_data"),
|
||||
}
|
||||
|
||||
# Use upsert to avoid duplicates
|
||||
stmt = pg_insert(Resource).values(**resource_data)
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['body_id', 'resource_type', 'file_path'],
|
||||
set_={
|
||||
'file_size': file_size,
|
||||
'mime_type': mapping["mime_type"],
|
||||
'extra_data': mapping.get("extra_data"),
|
||||
}
|
||||
)
|
||||
|
||||
await session.execute(stmt)
|
||||
texture_count += 1
|
||||
print(f" ✅ {filename} -> {mapping['body_id'] or 'global'} ({file_size} bytes)")
|
||||
|
||||
# Process models
|
||||
print("\n🚀 Processing models...")
|
||||
model_count = 0
|
||||
for filename, mapping in MODEL_MAPPING.items():
|
||||
file_path = model_dir / filename
|
||||
if not file_path.exists():
|
||||
print(f"⚠️ Warning: Model file not found: {filename}")
|
||||
continue
|
||||
|
||||
file_size = file_path.stat().st_size
|
||||
|
||||
# Prepare resource data
|
||||
resource_data = {
|
||||
"body_id": mapping["body_id"],
|
||||
"resource_type": mapping["resource_type"],
|
||||
"file_path": f"model/{filename}",
|
||||
"file_size": file_size,
|
||||
"mime_type": mapping["mime_type"],
|
||||
"extra_data": mapping.get("extra_data"),
|
||||
}
|
||||
|
||||
# Use upsert to avoid duplicates
|
||||
stmt = pg_insert(Resource).values(**resource_data)
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['body_id', 'resource_type', 'file_path'],
|
||||
set_={
|
||||
'file_size': file_size,
|
||||
'mime_type': mapping["mime_type"],
|
||||
'extra_data': mapping.get("extra_data"),
|
||||
}
|
||||
)
|
||||
|
||||
await session.execute(stmt)
|
||||
model_count += 1
|
||||
print(f" ✅ {filename} -> {mapping['body_id']} ({file_size} bytes)")
|
||||
|
||||
# Commit all changes
|
||||
await session.commit()
|
||||
|
||||
print(f"\n✨ Successfully populated resources table:")
|
||||
print(f" 📊 Textures: {texture_count}")
|
||||
print(f" 📊 Models: {model_count}")
|
||||
print(f" 📊 Total: {texture_count + model_count}")
|
||||
|
||||
except Exception as e:
|
||||
print(f"❌ Error populating resources: {e}")
|
||||
await session.rollback()
|
||||
raise
|
||||
finally:
|
||||
break
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(populate_resources())
|
||||
|
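The `on_conflict_do_update` calls above target the columns `(body_id, resource_type, file_path)`, which only works if the `resources` table actually carries a unique constraint or index on exactly those columns (presumably what the "Recreate resources table with unique constraint" script further down provides). Below is a hedged sketch of how such a constraint might be declared; the column types are assumptions, and the real model lives in `app/models/db/resource.py`. Note also that the skybox texture is inserted with `body_id=None`, and PostgreSQL treats NULLs as distinct in a unique constraint, so that particular row would not be deduplicated by `ON CONFLICT`.

```python
# Sketch only: the shape of the unique constraint assumed by the upserts above.
from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class ResourceSketch(Base):
    __tablename__ = "resources"

    id = Column(Integer, primary_key=True)
    body_id = Column(String, nullable=True)          # assumed type
    resource_type = Column(String, nullable=False)   # assumed type
    file_path = Column(String, nullable=False)       # assumed type

    __table_args__ = (
        UniqueConstraint(
            "body_id", "resource_type", "file_path",
            name="uq_resources_body_type_path",      # assumed name
        ),
    )
```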
|
@ -0,0 +1,223 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
Historical Data Prefetch Script
|
||||
|
||||
This script prefetches historical position data for all celestial bodies
|
||||
and stores them in the database for fast retrieval.
|
||||
|
||||
Usage:
|
||||
# Prefetch last 12 months
|
||||
python scripts/prefetch_historical_data.py --months 12
|
||||
|
||||
# Prefetch specific year-month
|
||||
python scripts/prefetch_historical_data.py --year 2024 --month 1
|
||||
|
||||
# Prefetch a range
|
||||
python scripts/prefetch_historical_data.py --start-year 2023 --start-month 1 --end-year 2023 --end-month 12
|
||||
"""
|
||||
|
||||
import sys
|
||||
import os
|
||||
import asyncio
|
||||
import argparse
|
||||
from datetime import datetime, timedelta
|
||||
from dateutil.relativedelta import relativedelta
|
||||
|
||||
# Add backend to path
|
||||
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
|
||||
|
||||
from app.database import get_db
|
||||
from app.services.horizons import horizons_service
|
||||
from app.services.db_service import position_service, celestial_body_service
|
||||
|
||||
|
||||
async def prefetch_month(year: int, month: int, session):
|
||||
"""
|
||||
Prefetch data for a specific month
|
||||
|
||||
Args:
|
||||
year: Year (e.g., 2023)
|
||||
month: Month (1-12)
|
||||
session: Database session
|
||||
"""
|
||||
# Calculate start and end of month
|
||||
start_date = datetime(year, month, 1, 0, 0, 0)
|
||||
if month == 12:
|
||||
end_date = datetime(year + 1, 1, 1, 0, 0, 0)
|
||||
else:
|
||||
end_date = datetime(year, month + 1, 1, 0, 0, 0)
|
||||
|
||||
print(f"\n{'='*60}")
|
||||
print(f"📅 Prefetching data for {year}-{month:02d}")
|
||||
print(f" Period: {start_date.date()} to {end_date.date()}")
|
||||
print(f"{'='*60}")
|
||||
|
||||
# Get all celestial bodies from database
|
||||
all_bodies = await celestial_body_service.get_all_bodies(session)
|
||||
total_bodies = len(all_bodies)
|
||||
success_count = 0
|
||||
skip_count = 0
|
||||
error_count = 0
|
||||
|
||||
for idx, body in enumerate(all_bodies, 1):
|
||||
body_id = body.id
|
||||
body_name = body.name
|
||||
|
||||
try:
|
||||
# Check if we already have data for this month
|
||||
existing_positions = await position_service.get_positions_in_range(
|
||||
body_id, start_date, end_date, session
|
||||
)
|
||||
|
||||
if existing_positions and len(existing_positions) > 0:
|
||||
print(f" [{idx}/{total_bodies}] ⏭️ {body_name:20s} - Already exists ({len(existing_positions)} positions)")
|
||||
skip_count += 1
|
||||
continue
|
||||
|
||||
print(f" [{idx}/{total_bodies}] 🔄 {body_name:20s} - Fetching...", end='', flush=True)
|
||||
|
||||
# Query NASA Horizons API for this month
|
||||
# Sample every 7 days to reduce data volume
|
||||
step = "7d"
|
||||
|
||||
if body_id == "10":
|
||||
# Sun is always at origin
|
||||
positions = [
|
||||
{"time": start_date, "x": 0.0, "y": 0.0, "z": 0.0},
|
||||
{"time": end_date, "x": 0.0, "y": 0.0, "z": 0.0},
|
||||
]
|
||||
elif body_id == "-82":
|
||||
# Cassini mission ended 2017-09-15
|
||||
if year < 2017 or (year == 2017 and month <= 9):
|
||||
cassini_date = datetime(2017, 9, 15, 11, 58, 0)
|
||||
positions_data = horizons_service.get_body_positions(
|
||||
body_id, cassini_date, cassini_date, step
|
||||
)
|
||||
positions = [
|
||||
{"time": p.time, "x": p.x, "y": p.y, "z": p.z}
|
||||
for p in positions_data
|
||||
]
|
||||
else:
|
||||
print(f" ⏭️ Mission ended", flush=True)
|
||||
skip_count += 1
|
||||
continue
|
||||
else:
|
||||
# Query other bodies
|
||||
positions_data = horizons_service.get_body_positions(
|
||||
body_id, start_date, end_date, step
|
||||
)
|
||||
positions = [
|
||||
{"time": p.time, "x": p.x, "y": p.y, "z": p.z}
|
||||
for p in positions_data
|
||||
]
|
||||
|
||||
# Store in database
|
||||
for pos_data in positions:
|
||||
await position_service.save_position(
|
||||
body_id=body_id,
|
||||
time=pos_data["time"],
|
||||
x=pos_data["x"],
|
||||
y=pos_data["y"],
|
||||
z=pos_data["z"],
|
||||
source="nasa_horizons",
|
||||
session=session,
|
||||
)
|
||||
|
||||
print(f" ✅ Saved {len(positions)} positions", flush=True)
|
||||
success_count += 1
|
||||
|
||||
# Small delay to avoid overwhelming NASA API
|
||||
await asyncio.sleep(0.5)
|
||||
|
||||
except Exception as e:
|
||||
print(f" ❌ Error: {str(e)}", flush=True)
|
||||
error_count += 1
|
||||
continue
|
||||
|
||||
print(f"\n{'='*60}")
|
||||
print(f"📊 Summary for {year}-{month:02d}:")
|
||||
print(f" ✅ Success: {success_count}")
|
||||
print(f" ⏭️ Skipped: {skip_count}")
|
||||
print(f" ❌ Errors: {error_count}")
|
||||
print(f"{'='*60}\n")
|
||||
|
||||
return success_count, skip_count, error_count
|
||||
|
||||
|
||||
async def main():
|
||||
parser = argparse.ArgumentParser(description="Prefetch historical celestial data")
|
||||
parser.add_argument("--months", type=int, help="Number of months to prefetch from now (default: 12)")
|
||||
parser.add_argument("--year", type=int, help="Specific year to prefetch")
|
||||
parser.add_argument("--month", type=int, help="Specific month to prefetch (1-12)")
|
||||
parser.add_argument("--start-year", type=int, help="Start year for range")
|
||||
parser.add_argument("--start-month", type=int, help="Start month for range (1-12)")
|
||||
parser.add_argument("--end-year", type=int, help="End year for range")
|
||||
parser.add_argument("--end-month", type=int, help="End month for range (1-12)")
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Determine date range
|
||||
months_to_fetch = []
|
||||
|
||||
if args.year and args.month:
|
||||
# Single month
|
||||
months_to_fetch.append((args.year, args.month))
|
||||
elif args.start_year and args.start_month and args.end_year and args.end_month:
|
||||
# Date range
|
||||
current = datetime(args.start_year, args.start_month, 1)
|
||||
end = datetime(args.end_year, args.end_month, 1)
|
||||
while current <= end:
|
||||
months_to_fetch.append((current.year, current.month))
|
||||
current += relativedelta(months=1)
|
||||
else:
|
||||
# Default: last N months
|
||||
months = args.months or 12
|
||||
current = datetime.now()
|
||||
for i in range(months):
|
||||
past_date = current - relativedelta(months=i)
|
||||
months_to_fetch.append((past_date.year, past_date.month))
|
||||
months_to_fetch.reverse() # Start from oldest
|
||||
|
||||
if not months_to_fetch:
|
||||
print("❌ No months to fetch. Please specify a valid date range.")
|
||||
return
|
||||
|
||||
print(f"\n🚀 Historical Data Prefetch Script")
|
||||
print(f"{'='*60}")
|
||||
print(f"📅 Total months to fetch: {len(months_to_fetch)}")
|
||||
print(f" From: {months_to_fetch[0][0]}-{months_to_fetch[0][1]:02d}")
|
||||
print(f" To: {months_to_fetch[-1][0]}-{months_to_fetch[-1][1]:02d}")
|
||||
print(f"{'='*60}\n")
|
||||
|
||||
total_success = 0
|
||||
total_skip = 0
|
||||
total_error = 0
|
||||
|
||||
async for session in get_db():
|
||||
start_time = datetime.now()
|
||||
|
||||
for year, month in months_to_fetch:
|
||||
success, skip, error = await prefetch_month(year, month, session)
|
||||
total_success += success
|
||||
total_skip += skip
|
||||
total_error += error
|
||||
|
||||
end_time = datetime.now()
|
||||
duration = end_time - start_time
|
||||
|
||||
print(f"\n{'='*60}")
|
||||
print(f"🎉 Prefetch Complete!")
|
||||
print(f"{'='*60}")
|
||||
print(f"📊 Overall Summary:")
|
||||
print(f" Total months processed: {len(months_to_fetch)}")
|
||||
print(f" ✅ Total success: {total_success}")
|
||||
print(f" ⏭️ Total skipped: {total_skip}")
|
||||
print(f" ❌ Total errors: {total_error}")
|
||||
print(f" ⏱️ Duration: {duration}")
|
||||
print(f"{'='*60}\n")
|
||||
|
||||
break
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
|
|
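With `step = "7d"`, each body should end up with roughly four or five stored samples per month. The sketch below is an illustrative coverage check (not part of the committed script) that reuses the same services to report per-body sample counts for a single month.

```python
# Illustrative coverage check for one prefetched month.
import asyncio
from datetime import datetime

from app.database import get_db
from app.services.db_service import celestial_body_service, position_service


async def check_month(year: int, month: int):
    start = datetime(year, month, 1)
    if month == 12:
        end = datetime(year + 1, 1, 1)
    else:
        end = datetime(year, month + 1, 1)

    async for session in get_db():
        for body in await celestial_body_service.get_all_bodies(session):
            positions = await position_service.get_positions_in_range(
                body.id, start, end, session
            )
            count = len(positions) if positions else 0
            print(f"{body.name:20s} {count} samples")
        break


if __name__ == "__main__":
    asyncio.run(check_month(2024, 1))
```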
@ -0,0 +1,27 @@
"""
Recreate resources table with unique constraint
"""
import asyncio
from app.database import engine
from app.models.db.resource import Resource
from sqlalchemy import text


async def recreate_resources_table():
    """Drop and recreate resources table"""
    async with engine.begin() as conn:
        # Drop the table
        print("🗑️ Dropping resources table...")
        await conn.execute(text("DROP TABLE IF EXISTS resources CASCADE"))
        print("✓ Table dropped")

        # Recreate the table
        print("📦 Creating resources table with new schema...")
        await conn.run_sync(Resource.metadata.create_all)
        print("✓ Table created")

    print("\n✨ Resources table recreated successfully!")


if __name__ == "__main__":
    asyncio.run(recreate_resources_table())
@ -0,0 +1,45 @@
"""
Reset admin user password to 'cosmo'
"""
import asyncio
import sys
sys.path.insert(0, '/Users/jiliu/WorkSpace/cosmo/backend')

from sqlalchemy import select, update
from app.database import AsyncSessionLocal
from app.models.db import User


async def reset_password():
    # Pre-generated bcrypt hash for 'cosmo'
    new_hash = '$2b$12$42d8/NAaYJlK8w/1yBd5uegdHlDkpC9XFtXYu2sWq0EXj48KAMZ0i'

    async with AsyncSessionLocal() as session:
        # Find admin user
        result = await session.execute(
            select(User).where(User.username == 'cosmo')
        )
        user = result.scalar_one_or_none()

        if not user:
            print("❌ Admin user 'cosmo' not found!")
            return

        print(f"Found user: {user.username}")
        print(f"New password hash: {new_hash[:50]}...")

        # Update password
        await session.execute(
            update(User)
            .where(User.username == 'cosmo')
            .values(password_hash=new_hash)
        )
        await session.commit()

        print("✅ Admin password reset successfully!")
        print("Username: cosmo")
        print("Password: cosmo")


if __name__ == "__main__":
    asyncio.run(reset_password())
|
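The hash above was pre-generated. If a different password is ever needed, an equivalent hash can be produced with the same `bcrypt` package already listed in the requirements; a minimal sketch:

```python
# Sketch: generate a bcrypt hash for a new password and verify it round-trips.
import bcrypt

new_password = "cosmo"  # replace with the desired password
password_hash = bcrypt.hashpw(
    new_password.encode("utf-8"), bcrypt.gensalt()
).decode("utf-8")

# Sanity check before pasting the value into new_hash above
assert bcrypt.checkpw(new_password.encode("utf-8"), password_hash.encode("utf-8"))
print(password_hash)
```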
@ -0,0 +1,217 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
Seed initial admin user, roles, and menus
|
||||
|
||||
Creates:
|
||||
1. Two roles: admin and user
|
||||
2. Admin user: cosmo / cosmo
|
||||
3. Admin menu structure
|
||||
|
||||
Usage:
|
||||
python scripts/seed_admin.py
|
||||
"""
|
||||
import asyncio
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from sqlalchemy import select
|
||||
from app.database import AsyncSessionLocal
|
||||
from app.models.db import User, Role, Menu, RoleMenu
|
||||
import bcrypt
|
||||
import logging
|
||||
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def hash_password(password: str) -> str:
|
||||
"""Hash password using bcrypt"""
|
||||
return bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt()).decode('utf-8')
|
||||
|
||||
|
||||
async def main():
|
||||
"""Seed admin data"""
|
||||
async with AsyncSessionLocal() as session:
|
||||
try:
|
||||
# 1. Create roles
|
||||
logger.info("Creating roles...")
|
||||
|
||||
# Check if roles already exist
|
||||
result = await session.execute(select(Role))
|
||||
existing_roles = result.scalars().all()
|
||||
|
||||
if existing_roles:
|
||||
logger.info(f"Roles already exist: {[r.name for r in existing_roles]}")
|
||||
admin_role = next((r for r in existing_roles if r.name == 'admin'), None)
|
||||
user_role = next((r for r in existing_roles if r.name == 'user'), None)
|
||||
else:
|
||||
admin_role = Role(
|
||||
name='admin',
|
||||
display_name='管理员',
|
||||
description='系统管理员,拥有所有权限'
|
||||
)
|
||||
user_role = Role(
|
||||
name='user',
|
||||
display_name='普通用户',
|
||||
description='普通用户,仅有基本访问权限'
|
||||
)
|
||||
session.add(admin_role)
|
||||
session.add(user_role)
|
||||
await session.flush()
|
||||
logger.info(f"✓ Created roles: admin, user")
|
||||
|
||||
# 2. Create admin user
|
||||
logger.info("Creating admin user...")
|
||||
|
||||
# Check if admin user already exists
|
||||
result = await session.execute(
|
||||
select(User).where(User.username == 'cosmo')
|
||||
)
|
||||
existing_user = result.scalar_one_or_none()
|
||||
|
||||
if existing_user:
|
||||
logger.info(f"Admin user 'cosmo' already exists (id={existing_user.id})")
|
||||
admin_user = existing_user
|
||||
else:
|
||||
admin_user = User(
|
||||
username='cosmo',
|
||||
password_hash=hash_password('cosmo'),
|
||||
email='admin@cosmo.com',
|
||||
full_name='Cosmo Administrator',
|
||||
is_active=True
|
||||
)
|
||||
session.add(admin_user)
|
||||
await session.flush()
|
||||
|
||||
# Assign admin role to user using direct insert to avoid lazy loading
|
||||
from app.models.db.user import user_roles
|
||||
await session.execute(
|
||||
user_roles.insert().values(
|
||||
user_id=admin_user.id,
|
||||
role_id=admin_role.id
|
||||
)
|
||||
)
|
||||
await session.flush()
|
||||
|
||||
logger.info(f"✓ Created admin user: cosmo / cosmo")
|
||||
|
||||
# 3. Create admin menus
|
||||
logger.info("Creating admin menus...")
|
||||
|
||||
# Check if menus already exist
|
||||
result = await session.execute(select(Menu))
|
||||
existing_menus = result.scalars().all()
|
||||
|
||||
if existing_menus:
|
||||
logger.info(f"Menus already exist ({len(existing_menus)} menus)")
|
||||
else:
|
||||
# Root menu items
|
||||
dashboard_menu = Menu(
|
||||
name='dashboard',
|
||||
title='控制台',
|
||||
icon='dashboard',
|
||||
path='/admin/dashboard',
|
||||
component='admin/Dashboard',
|
||||
sort_order=1,
|
||||
is_active=True,
|
||||
description='系统控制台'
|
||||
)
|
||||
|
||||
data_management_menu = Menu(
|
||||
name='data_management',
|
||||
title='数据管理',
|
||||
icon='database',
|
||||
path=None, # Parent menu, no direct path
|
||||
component=None,
|
||||
sort_order=2,
|
||||
is_active=True,
|
||||
description='数据管理模块'
|
||||
)
|
||||
|
||||
session.add(dashboard_menu)
|
||||
session.add(data_management_menu)
|
||||
await session.flush()
|
||||
|
||||
# Sub-menu items under data_management
|
||||
celestial_bodies_menu = Menu(
|
||||
parent_id=data_management_menu.id,
|
||||
name='celestial_bodies',
|
||||
title='天体数据列表',
|
||||
icon='planet',
|
||||
path='/admin/celestial-bodies',
|
||||
component='admin/CelestialBodies',
|
||||
sort_order=1,
|
||||
is_active=True,
|
||||
description='查看和管理天体数据'
|
||||
)
|
||||
|
||||
static_data_menu = Menu(
|
||||
parent_id=data_management_menu.id,
|
||||
name='static_data',
|
||||
title='静态数据列表',
|
||||
icon='data',
|
||||
path='/admin/static-data',
|
||||
component='admin/StaticData',
|
||||
sort_order=2,
|
||||
is_active=True,
|
||||
description='查看和管理静态数据(星座、星系等)'
|
||||
)
|
||||
|
||||
nasa_data_menu = Menu(
|
||||
parent_id=data_management_menu.id,
|
||||
name='nasa_data',
|
||||
title='NASA数据下载管理',
|
||||
icon='download',
|
||||
path='/admin/nasa-data',
|
||||
component='admin/NasaData',
|
||||
sort_order=3,
|
||||
is_active=True,
|
||||
description='管理NASA Horizons数据下载'
|
||||
)
|
||||
|
||||
session.add(celestial_bodies_menu)
|
||||
session.add(static_data_menu)
|
||||
session.add(nasa_data_menu)
|
||||
await session.flush()
|
||||
|
||||
logger.info(f"✓ Created {5} menu items")
|
||||
|
||||
# 4. Assign all menus to admin role
|
||||
logger.info("Assigning menus to admin role...")
|
||||
all_menus = [
|
||||
dashboard_menu,
|
||||
data_management_menu,
|
||||
celestial_bodies_menu,
|
||||
static_data_menu,
|
||||
nasa_data_menu
|
||||
]
|
||||
|
||||
for menu in all_menus:
|
||||
role_menu = RoleMenu(role_id=admin_role.id, menu_id=menu.id)
|
||||
session.add(role_menu)
|
||||
|
||||
await session.flush()
|
||||
logger.info(f"✓ Assigned {len(all_menus)} menus to admin role")
|
||||
|
||||
await session.commit()
|
||||
|
||||
logger.info("\n" + "=" * 60)
|
||||
logger.info("Admin data seeded successfully!")
|
||||
logger.info("=" * 60)
|
||||
logger.info("Admin credentials:")
|
||||
logger.info(" Username: cosmo")
|
||||
logger.info(" Password: cosmo")
|
||||
logger.info("=" * 60)
|
||||
|
||||
except Exception as e:
|
||||
await session.rollback()
|
||||
logger.error(f"Error seeding admin data: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
|
|
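Once the seed data is in place, the menu tree visible to a role can be reconstructed by joining `role_menus` back to `menus`. The sketch below is illustrative, assuming the same `Menu`, `Role`, and `RoleMenu` models used above; the real admin API may expose this differently.

```python
# Sketch only: list the menus granted to a given role, parents first by sort_order.
import asyncio

from sqlalchemy import select

from app.database import AsyncSessionLocal
from app.models.db import Menu, Role, RoleMenu


async def list_role_menus(role_name: str = "admin"):
    async with AsyncSessionLocal() as session:
        result = await session.execute(
            select(Menu)
            .join(RoleMenu, RoleMenu.menu_id == Menu.id)
            .join(Role, Role.id == RoleMenu.role_id)
            .where(Role.name == role_name)
            .order_by(Menu.sort_order)
        )
        for menu in result.scalars().all():
            indent = "  " if menu.parent_id else ""
            print(f"{indent}{menu.title} ({menu.path or 'group'})")


if __name__ == "__main__":
    asyncio.run(list_role_menus())
```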
@ -0,0 +1,193 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
Seed celestial bodies script
|
||||
|
||||
Adds all celestial bodies from CELESTIAL_BODIES to the database
|
||||
and fetches their current positions from NASA Horizons.
|
||||
|
||||
Usage:
|
||||
python scripts/seed_celestial_bodies.py
|
||||
"""
|
||||
|
||||
import sys
|
||||
import os
|
||||
import asyncio
|
||||
from datetime import datetime
|
||||
|
||||
# Add backend to path
|
||||
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
|
||||
|
||||
from app.database import get_db
|
||||
from app.services.horizons import horizons_service
|
||||
from app.services.db_service import celestial_body_service, position_service
|
||||
from app.models.celestial import CELESTIAL_BODIES
|
||||
|
||||
|
||||
async def seed_bodies():
|
||||
"""Seed celestial bodies into database"""
|
||||
print("\n" + "=" * 60)
|
||||
print("🌌 Seeding Celestial Bodies")
|
||||
print("=" * 60)
|
||||
|
||||
async for session in get_db():
|
||||
success_count = 0
|
||||
skip_count = 0
|
||||
error_count = 0
|
||||
|
||||
total = len(CELESTIAL_BODIES)
|
||||
|
||||
for idx, (body_id, info) in enumerate(CELESTIAL_BODIES.items(), 1):
|
||||
body_name = info["name"]
|
||||
|
||||
try:
|
||||
# Check if body already exists
|
||||
existing_body = await celestial_body_service.get_body_by_id(body_id, session)
|
||||
|
||||
if existing_body:
|
||||
print(f" [{idx}/{total}] ⏭️ {body_name:20s} - Already exists")
|
||||
skip_count += 1
|
||||
continue
|
||||
|
||||
print(f" [{idx}/{total}] 🔄 {body_name:20s} - Creating...", end='', flush=True)
|
||||
|
||||
# Create body record
|
||||
body_data = {
|
||||
"id": body_id,
|
||||
"name": info["name"],
|
||||
"name_zh": info.get("name_zh"),
|
||||
"type": info["type"],
|
||||
"description": info.get("description"),
|
||||
"extra_data": {
|
||||
"launch_date": info.get("launch_date"),
|
||||
"status": info.get("status"),
|
||||
}
|
||||
}
|
||||
|
||||
await celestial_body_service.create_body(body_data, session)
|
||||
print(f" ✅ Created", flush=True)
|
||||
success_count += 1
|
||||
|
||||
except Exception as e:
|
||||
print(f" ❌ Error: {str(e)}", flush=True)
|
||||
error_count += 1
|
||||
continue
|
||||
|
||||
print(f"\n{'='*60}")
|
||||
print(f"📊 Summary:")
|
||||
print(f" ✅ Created: {success_count}")
|
||||
print(f" ⏭️ Skipped: {skip_count}")
|
||||
print(f" ❌ Errors: {error_count}")
|
||||
print(f"{'='*60}\n")
|
||||
|
||||
break
|
||||
|
||||
|
||||
async def sync_current_positions():
|
||||
"""Fetch and store current positions for all bodies"""
|
||||
print("\n" + "=" * 60)
|
||||
print("📍 Syncing Current Positions")
|
||||
print("=" * 60)
|
||||
|
||||
async for session in get_db():
|
||||
now = datetime.utcnow()
|
||||
success_count = 0
|
||||
skip_count = 0
|
||||
error_count = 0
|
||||
|
||||
all_bodies = await celestial_body_service.get_all_bodies(session)
|
||||
total = len(all_bodies)
|
||||
|
||||
for idx, body in enumerate(all_bodies, 1):
|
||||
body_id = body.id
|
||||
body_name = body.name
|
||||
|
||||
try:
|
||||
# Check if we have recent position (within last hour)
|
||||
from datetime import timedelta
|
||||
recent_time = now - timedelta(hours=1)
|
||||
existing_positions = await position_service.get_positions(
|
||||
body_id, recent_time, now, session
|
||||
)
|
||||
|
||||
if existing_positions and len(existing_positions) > 0:
|
||||
print(f" [{idx}/{total}] ⏭️ {body_name:20s} - Recent data exists")
|
||||
skip_count += 1
|
||||
continue
|
||||
|
||||
print(f" [{idx}/{total}] 🔄 {body_name:20s} - Fetching...", end='', flush=True)
|
||||
|
||||
# Special handling for Sun
|
||||
if body_id == "10":
|
||||
positions_data = [{"time": now, "x": 0.0, "y": 0.0, "z": 0.0}]
|
||||
# Special handling for Cassini
|
||||
elif body_id == "-82":
|
||||
cassini_date = datetime(2017, 9, 15, 11, 58, 0)
|
||||
positions_data = horizons_service.get_body_positions(
|
||||
body_id, cassini_date, cassini_date
|
||||
)
|
||||
positions_data = [
|
||||
{"time": p.time, "x": p.x, "y": p.y, "z": p.z}
|
||||
for p in positions_data
|
||||
]
|
||||
else:
|
||||
# Query current position
|
||||
positions_data = horizons_service.get_body_positions(
|
||||
body_id, now, now
|
||||
)
|
||||
positions_data = [
|
||||
{"time": p.time, "x": p.x, "y": p.y, "z": p.z}
|
||||
for p in positions_data
|
||||
]
|
||||
|
||||
# Store positions
|
||||
for pos_data in positions_data:
|
||||
await position_service.save_position(
|
||||
body_id=body_id,
|
||||
time=pos_data["time"],
|
||||
x=pos_data["x"],
|
||||
y=pos_data["y"],
|
||||
z=pos_data["z"],
|
||||
source="nasa_horizons",
|
||||
session=session,
|
||||
)
|
||||
|
||||
print(f" ✅ Saved {len(positions_data)} position(s)", flush=True)
|
||||
success_count += 1
|
||||
|
||||
# Small delay to avoid overwhelming NASA API
|
||||
await asyncio.sleep(0.5)
|
||||
|
||||
except Exception as e:
|
||||
print(f" ❌ Error: {str(e)}", flush=True)
|
||||
error_count += 1
|
||||
continue
|
||||
|
||||
print(f"\n{'='*60}")
|
||||
print(f"📊 Summary:")
|
||||
print(f" ✅ Success: {success_count}")
|
||||
print(f" ⏭️ Skipped: {skip_count}")
|
||||
print(f" ❌ Errors: {error_count}")
|
||||
print(f"{'='*60}\n")
|
||||
|
||||
break
|
||||
|
||||
|
||||
async def main():
|
||||
print("\n🚀 Celestial Bodies Database Seeding")
|
||||
print("=" * 60)
|
||||
print("This script will:")
|
||||
print(" 1. Add all celestial bodies to the database")
|
||||
print(" 2. Fetch and store their current positions")
|
||||
print("=" * 60)
|
||||
|
||||
# Seed celestial bodies
|
||||
await seed_bodies()
|
||||
|
||||
# Sync current positions
|
||||
await sync_current_positions()
|
||||
|
||||
print("\n🎉 Seeding complete!")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
|
|
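Both this script and the historical prefetch script special-case the Sun (fixed at the origin) and Cassini (no ephemeris after the 2017-09-15 end of mission). If that duplication ever becomes a maintenance burden, the branching could be pulled into one helper. The sketch below is hypothetical: the function name and placement are assumptions and not part of the committed code.

```python
# Hypothetical helper, not in the repository: decide the query window for bodies
# that need special handling before calling horizons_service.get_body_positions.
from datetime import datetime

CASSINI_END = datetime(2017, 9, 15, 11, 58, 0)  # end of Cassini mission


def effective_query_window(body_id: str, start: datetime, end: datetime):
    """Return the (start, end) window to query, or None if nothing should be fetched."""
    if body_id == "10":
        # Sun: always at the origin, no Horizons query needed
        return None
    if body_id == "-82":
        # Cassini: only data up to the end of mission exists
        if start > CASSINI_END:
            return None
        return start, min(end, CASSINI_END)
    return start, end
```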
@ -0,0 +1,221 @@
|
|||
#!/bin/bash
|
||||
# Cosmo 后端一键初始化脚本
|
||||
|
||||
set -e # 遇到错误立即退出
|
||||
|
||||
# 颜色定义
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# 日志函数
|
||||
log_info() {
|
||||
echo -e "${BLUE}[INFO]${NC} $1"
|
||||
}
|
||||
|
||||
log_success() {
|
||||
echo -e "${GREEN}[SUCCESS]${NC} $1"
|
||||
}
|
||||
|
||||
log_warning() {
|
||||
echo -e "${YELLOW}[WARNING]${NC} $1"
|
||||
}
|
||||
|
||||
log_error() {
|
||||
echo -e "${RED}[ERROR]${NC} $1"
|
||||
}
|
||||
|
||||
# 打印标题
|
||||
print_header() {
|
||||
echo "================================================================="
|
||||
echo " Cosmo 后端初始化脚本"
|
||||
echo "================================================================="
|
||||
echo ""
|
||||
}
|
||||
|
||||
# 检查 Python
|
||||
check_python() {
|
||||
log_info "检查 Python 环境..."
|
||||
if ! command -v python3 &> /dev/null; then
|
||||
log_error "未找到 Python 3,请先安装 Python 3.9+"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
PYTHON_VERSION=$(python3 --version | awk '{print $2}')
|
||||
log_success "Python 版本: $PYTHON_VERSION"
|
||||
}
|
||||
|
||||
# 检查 PostgreSQL
|
||||
check_postgresql() {
|
||||
log_info "检查 PostgreSQL..."
|
||||
if ! command -v psql &> /dev/null; then
|
||||
log_error "未找到 psql 命令,请先安装 PostgreSQL"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# 尝试连接 PostgreSQL
|
||||
if psql -U postgres -c "SELECT version();" &> /dev/null; then
|
||||
log_success "PostgreSQL 连接成功"
|
||||
else
|
||||
log_error "无法连接到 PostgreSQL,请检查:"
|
||||
log_error " 1. PostgreSQL 是否正在运行"
|
||||
log_error " 2. 账号密码是否为 postgres/postgres"
|
||||
log_error " 3. 是否允许本地连接"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# 检查 Redis
|
||||
check_redis() {
|
||||
log_info "检查 Redis..."
|
||||
if ! command -v redis-cli &> /dev/null; then
|
||||
log_warning "未找到 redis-cli 命令"
|
||||
log_warning "Redis 是可选的,但建议安装以获得更好的缓存性能"
|
||||
return
|
||||
fi
|
||||
|
||||
# 尝试连接 Redis
|
||||
if redis-cli ping &> /dev/null; then
|
||||
log_success "Redis 连接成功"
|
||||
else
|
||||
log_warning "无法连接到 Redis"
|
||||
log_warning "应用会自动降级为仅使用内存缓存"
|
||||
fi
|
||||
}
|
||||
|
||||
# 检查依赖
|
||||
check_dependencies() {
|
||||
log_info "检查 Python 依赖包..."
|
||||
|
||||
cd "$(dirname "$0")/.." # 切换到 backend 目录
|
||||
|
||||
# 检查 requirements.txt 是否存在
|
||||
if [ ! -f "requirements.txt" ]; then
|
||||
log_error "未找到 requirements.txt 文件"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# 检查关键依赖是否已安装
|
||||
if ! python3 -c "import fastapi" &> /dev/null; then
|
||||
log_warning "依赖包未完全安装,正在安装..."
|
||||
pip install -r requirements.txt
|
||||
log_success "依赖包安装完成"
|
||||
else
|
||||
log_success "依赖包已安装"
|
||||
fi
|
||||
}
|
||||
|
||||
# 检查 .env 文件
|
||||
check_env_file() {
|
||||
log_info "检查配置文件..."
|
||||
|
||||
cd "$(dirname "$0")/.." # 确保在 backend 目录
|
||||
|
||||
if [ ! -f ".env" ]; then
|
||||
log_warning ".env 文件不存在,从 .env.example 创建..."
|
||||
if [ -f ".env.example" ]; then
|
||||
cp .env.example .env
|
||||
log_success ".env 文件创建成功"
|
||||
else
|
||||
log_error "未找到 .env.example 文件"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
log_success ".env 文件已存在"
|
||||
fi
|
||||
}
|
||||
|
||||
# 创建数据库
|
||||
create_database() {
|
||||
log_info "创建数据库..."
|
||||
|
||||
cd "$(dirname "$0")/.." # 确保在 backend 目录
|
||||
|
||||
if python3 scripts/create_db.py; then
|
||||
log_success "数据库创建完成"
|
||||
else
|
||||
log_error "数据库创建失败"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# 初始化数据库表
|
||||
init_database() {
|
||||
log_info "初始化数据库表结构..."
|
||||
|
||||
cd "$(dirname "$0")/.." # 确保在 backend 目录
|
||||
|
||||
if python3 scripts/init_db.py; then
|
||||
log_success "数据库表结构初始化完成"
|
||||
else
|
||||
log_error "数据库表结构初始化失败"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# 创建上传目录
|
||||
create_upload_dir() {
|
||||
log_info "创建上传目录..."
|
||||
|
||||
cd "$(dirname "$0")/.." # 确保在 backend 目录
|
||||
|
||||
if [ ! -d "upload" ]; then
|
||||
mkdir -p upload
|
||||
log_success "上传目录创建成功: upload/"
|
||||
else
|
||||
log_success "上传目录已存在: upload/"
|
||||
fi
|
||||
}
|
||||
|
||||
# 打印完成信息
|
||||
print_completion() {
|
||||
echo ""
|
||||
echo "================================================================="
|
||||
echo -e "${GREEN} ✓ 初始化完成!${NC}"
|
||||
echo "================================================================="
|
||||
echo ""
|
||||
echo "启动服务:"
|
||||
echo " cd backend"
|
||||
echo " python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000"
|
||||
echo ""
|
||||
echo "或者:"
|
||||
echo " python app/main.py"
|
||||
echo ""
|
||||
echo "访问:"
|
||||
echo " - API 文档: http://localhost:8000/docs"
|
||||
echo " - 健康检查: http://localhost:8000/health"
|
||||
echo " - 根路径: http://localhost:8000/"
|
||||
echo ""
|
||||
echo "================================================================="
|
||||
}
|
||||
|
||||
# 主函数
|
||||
main() {
|
||||
print_header
|
||||
|
||||
# 1. 检查环境
|
||||
check_python
|
||||
check_postgresql
|
||||
check_redis
|
||||
|
||||
# 2. 安装依赖
|
||||
check_dependencies
|
||||
|
||||
# 3. 配置文件
|
||||
check_env_file
|
||||
|
||||
# 4. 数据库初始化
|
||||
create_database
|
||||
init_database
|
||||
|
||||
# 5. 创建必要目录
|
||||
create_upload_dir
|
||||
|
||||
# 6. 完成
|
||||
print_completion
|
||||
}
|
||||
|
||||
# 执行主函数
|
||||
main
|
||||
|
|
@ -0,0 +1,49 @@
"""
Test fetching Pluto position from NASA Horizons
"""
import asyncio
from datetime import datetime, UTC
from app.services.horizons import HorizonsService


async def test_pluto():
    """Test if we can fetch Pluto's position"""
    print("🔍 Testing Pluto position fetch from NASA Horizons API...")

    horizons = HorizonsService()

    try:
        # Fetch current position for Pluto (ID: 999)
        now = datetime.now(UTC)
        positions = horizons.get_body_positions(
            body_id="999",
            start_time=now,
            end_time=now,
            step="1d"
        )

        if positions:
            print(f"\n✅ Successfully fetched Pluto position!")
            print(f"   Time: {positions[0].time}")
            print(f"   Position (AU):")
            print(f"     X: {positions[0].x:.4f}")
            print(f"     Y: {positions[0].y:.4f}")
            print(f"     Z: {positions[0].z:.4f}")

            # Calculate distance from Sun
            import math
            distance = math.sqrt(
                positions[0].x**2 +
                positions[0].y**2 +
                positions[0].z**2
            )
            print(f"   Distance from Sun: {distance:.2f} AU")
        else:
            print("❌ No position data returned")

    except Exception as e:
        print(f"❌ Error fetching Pluto position: {e}")


if __name__ == "__main__":
    asyncio.run(test_pluto())
@ -0,0 +1,623 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
Update static_data table with expanded astronomical data
|
||||
"""
|
||||
import asyncio
|
||||
import sys
|
||||
import os
|
||||
|
||||
# Add parent directory to path
|
||||
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
|
||||
from app.database import get_db
|
||||
from app.services.db_service import static_data_service
|
||||
from app.models.db import StaticData
|
||||
from sqlalchemy import select, update, insert
|
||||
from sqlalchemy.dialects.postgresql import insert as pg_insert
|
||||
|
||||
|
||||
# Expanded constellation data (15 constellations)
|
||||
CONSTELLATIONS = [
|
||||
{
|
||||
"name": "Orion",
|
||||
"name_zh": "猎户座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Betelgeuse", "ra": 88.79, "dec": 7.41},
|
||||
{"name": "Bellatrix", "ra": 81.28, "dec": 6.35},
|
||||
{"name": "Alnitak", "ra": 85.19, "dec": -1.94},
|
||||
{"name": "Alnilam", "ra": 84.05, "dec": -1.20},
|
||||
{"name": "Mintaka", "ra": 83.00, "dec": -0.30},
|
||||
{"name": "Saiph", "ra": 86.94, "dec": -9.67},
|
||||
{"name": "Rigel", "ra": 78.63, "dec": -8.20}
|
||||
],
|
||||
"lines": [[0, 1], [1, 2], [2, 3], [3, 4], [2, 5], [5, 6]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Ursa Major",
|
||||
"name_zh": "大熊座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Dubhe", "ra": 165.93, "dec": 61.75},
|
||||
{"name": "Merak", "ra": 165.46, "dec": 56.38},
|
||||
{"name": "Phecda", "ra": 178.46, "dec": 53.69},
|
||||
{"name": "Megrez", "ra": 183.86, "dec": 57.03},
|
||||
{"name": "Alioth", "ra": 193.51, "dec": 55.96},
|
||||
{"name": "Mizar", "ra": 200.98, "dec": 54.93},
|
||||
{"name": "Alkaid", "ra": 206.89, "dec": 49.31}
|
||||
],
|
||||
"lines": [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Cassiopeia",
|
||||
"name_zh": "仙后座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Caph", "ra": 2.29, "dec": 59.15},
|
||||
{"name": "Schedar", "ra": 10.13, "dec": 56.54},
|
||||
{"name": "Navi", "ra": 14.18, "dec": 60.72},
|
||||
{"name": "Ruchbah", "ra": 21.45, "dec": 60.24},
|
||||
{"name": "Segin", "ra": 25.65, "dec": 63.67}
|
||||
],
|
||||
"lines": [[0, 1], [1, 2], [2, 3], [3, 4]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Leo",
|
||||
"name_zh": "狮子座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Regulus", "ra": 152.09, "dec": 11.97},
|
||||
{"name": "Denebola", "ra": 177.26, "dec": 14.57},
|
||||
{"name": "Algieba", "ra": 154.99, "dec": 19.84},
|
||||
{"name": "Zosma", "ra": 168.53, "dec": 20.52},
|
||||
{"name": "Chertan", "ra": 173.95, "dec": 15.43}
|
||||
],
|
||||
"lines": [[0, 2], [2, 3], [3, 4], [4, 1], [1, 0]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Scorpius",
|
||||
"name_zh": "天蝎座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Antares", "ra": 247.35, "dec": -26.43},
|
||||
{"name": "Shaula", "ra": 263.40, "dec": -37.10},
|
||||
{"name": "Sargas", "ra": 264.33, "dec": -43.00},
|
||||
{"name": "Dschubba", "ra": 240.08, "dec": -22.62},
|
||||
{"name": "Lesath", "ra": 262.69, "dec": -37.29}
|
||||
],
|
||||
"lines": [[3, 0], [0, 1], [1, 4], [1, 2]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Cygnus",
|
||||
"name_zh": "天鹅座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Deneb", "ra": 310.36, "dec": 45.28},
|
||||
{"name": "Sadr", "ra": 305.56, "dec": 40.26},
|
||||
{"name": "Albireo", "ra": 292.68, "dec": 27.96},
|
||||
{"name": "Delta Cygni", "ra": 296.24, "dec": 45.13},
|
||||
{"name": "Gienah", "ra": 314.29, "dec": 33.97}
|
||||
],
|
||||
"lines": [[0, 1], [1, 2], [1, 3], [1, 4]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Aquila",
|
||||
"name_zh": "天鹰座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Altair", "ra": 297.70, "dec": 8.87},
|
||||
{"name": "Tarazed", "ra": 296.56, "dec": 10.61},
|
||||
{"name": "Alshain", "ra": 298.83, "dec": 6.41},
|
||||
{"name": "Deneb el Okab", "ra": 304.48, "dec": 15.07}
|
||||
],
|
||||
"lines": [[1, 0], [0, 2], [0, 3]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Lyra",
|
||||
"name_zh": "天琴座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Vega", "ra": 279.23, "dec": 38.78},
|
||||
{"name": "Sheliak", "ra": 282.52, "dec": 33.36},
|
||||
{"name": "Sulafat", "ra": 284.74, "dec": 32.69},
|
||||
{"name": "Delta Lyrae", "ra": 283.82, "dec": 36.90}
|
||||
],
|
||||
"lines": [[0, 3], [3, 1], [1, 2], [2, 0]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Pegasus",
|
||||
"name_zh": "飞马座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Markab", "ra": 346.19, "dec": 15.21},
|
||||
{"name": "Scheat", "ra": 345.94, "dec": 28.08},
|
||||
{"name": "Algenib", "ra": 3.31, "dec": 15.18},
|
||||
{"name": "Enif", "ra": 326.05, "dec": 9.88}
|
||||
],
|
||||
"lines": [[0, 1], [1, 2], [2, 0], [0, 3]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Andromeda",
|
||||
"name_zh": "仙女座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Alpheratz", "ra": 2.10, "dec": 29.09},
|
||||
{"name": "Mirach", "ra": 17.43, "dec": 35.62},
|
||||
{"name": "Almach", "ra": 30.97, "dec": 42.33},
|
||||
{"name": "Delta Andromedae", "ra": 8.78, "dec": 30.86}
|
||||
],
|
||||
"lines": [[0, 3], [3, 1], [1, 2]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Taurus",
|
||||
"name_zh": "金牛座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Aldebaran", "ra": 68.98, "dec": 16.51},
|
||||
{"name": "Elnath", "ra": 81.57, "dec": 28.61},
|
||||
{"name": "Alcyone", "ra": 56.87, "dec": 24.11},
|
||||
{"name": "Zeta Tauri", "ra": 84.41, "dec": 21.14}
|
||||
],
|
||||
"lines": [[0, 1], [0, 2], [1, 3]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Gemini",
|
||||
"name_zh": "双子座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Pollux", "ra": 116.33, "dec": 28.03},
|
||||
{"name": "Castor", "ra": 113.65, "dec": 31.89},
|
||||
{"name": "Alhena", "ra": 99.43, "dec": 16.40},
|
||||
{"name": "Mebsuta", "ra": 100.98, "dec": 25.13}
|
||||
],
|
||||
"lines": [[0, 1], [0, 2], [1, 3], [3, 2]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Virgo",
|
||||
"name_zh": "室女座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Spica", "ra": 201.30, "dec": -11.16},
|
||||
{"name": "Porrima", "ra": 190.42, "dec": 1.76},
|
||||
{"name": "Vindemiatrix", "ra": 195.54, "dec": 10.96},
|
||||
{"name": "Heze", "ra": 211.67, "dec": -0.67}
|
||||
],
|
||||
"lines": [[2, 1], [1, 0], [0, 3]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Sagittarius",
|
||||
"name_zh": "人马座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Kaus Australis", "ra": 276.04, "dec": -34.38},
|
||||
{"name": "Nunki", "ra": 283.82, "dec": -26.30},
|
||||
{"name": "Ascella", "ra": 290.97, "dec": -29.88},
|
||||
{"name": "Kaus Media", "ra": 276.99, "dec": -29.83},
|
||||
{"name": "Kaus Borealis", "ra": 279.23, "dec": -25.42}
|
||||
],
|
||||
"lines": [[0, 3], [3, 4], [4, 1], [1, 2]]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Capricornus",
|
||||
"name_zh": "摩羯座",
|
||||
"data": {
|
||||
"stars": [
|
||||
{"name": "Deneb Algedi", "ra": 326.76, "dec": -16.13},
|
||||
{"name": "Dabih", "ra": 305.25, "dec": -14.78},
|
||||
{"name": "Nashira", "ra": 325.02, "dec": -16.66},
|
||||
{"name": "Algedi", "ra": 304.51, "dec": -12.51}
|
||||
],
|
||||
"lines": [[3, 1], [1, 2], [2, 0]]
|
||||
}
|
||||
}
|
||||
]
|
||||
|
||||
# Expanded galaxy data (12 galaxies)
|
||||
GALAXIES = [
|
||||
{
|
||||
"name": "Andromeda Galaxy",
|
||||
"name_zh": "仙女座星系",
|
||||
"data": {
|
||||
"type": "spiral",
|
||||
"distance_mly": 2.537,
|
||||
"ra": 10.68,
|
||||
"dec": 41.27,
|
||||
"magnitude": 3.44,
|
||||
"diameter_kly": 220,
|
||||
"color": "#CCDDFF"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Triangulum Galaxy",
|
||||
"name_zh": "三角座星系",
|
||||
"data": {
|
||||
"type": "spiral",
|
||||
"distance_mly": 2.73,
|
||||
"ra": 23.46,
|
||||
"dec": 30.66,
|
||||
"magnitude": 5.72,
|
||||
"diameter_kly": 60,
|
||||
"color": "#AACCEE"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Large Magellanic Cloud",
|
||||
"name_zh": "大麦哲伦云",
|
||||
"data": {
|
||||
"type": "irregular",
|
||||
"distance_mly": 0.163,
|
||||
"ra": 80.89,
|
||||
"dec": -69.76,
|
||||
"magnitude": 0.9,
|
||||
"diameter_kly": 14,
|
||||
"color": "#DDCCFF"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Small Magellanic Cloud",
|
||||
"name_zh": "小麦哲伦云",
|
||||
"data": {
|
||||
"type": "irregular",
|
||||
"distance_mly": 0.197,
|
||||
"ra": 12.80,
|
||||
"dec": -73.15,
|
||||
"magnitude": 2.7,
|
||||
"diameter_kly": 7,
|
||||
"color": "#CCBBEE"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Milky Way Center",
|
||||
"name_zh": "银河系中心",
|
||||
"data": {
|
||||
"type": "galactic_center",
|
||||
"distance_mly": 0.026,
|
||||
"ra": 266.42,
|
||||
"dec": -29.01,
|
||||
"magnitude": -1,
|
||||
"diameter_kly": 100,
|
||||
"color": "#FFFFAA"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Whirlpool Galaxy",
|
||||
"name_zh": "漩涡星系",
|
||||
"data": {
|
||||
"type": "spiral",
|
||||
"distance_mly": 23,
|
||||
"ra": 202.47,
|
||||
"dec": 47.20,
|
||||
"magnitude": 8.4,
|
||||
"diameter_kly": 76,
|
||||
"color": "#AADDFF"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Sombrero Galaxy",
|
||||
"name_zh": "草帽星系",
|
||||
"data": {
|
||||
"type": "spiral",
|
||||
"distance_mly": 29.3,
|
||||
"ra": 189.99,
|
||||
"dec": -11.62,
|
||||
"magnitude": 8.0,
|
||||
"diameter_kly": 50,
|
||||
"color": "#FFDDAA"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Pinwheel Galaxy",
|
||||
"name_zh": "风车星系",
|
||||
"data": {
|
||||
"type": "spiral",
|
||||
"distance_mly": 21,
|
||||
"ra": 210.80,
|
||||
"dec": 54.35,
|
||||
"magnitude": 7.9,
|
||||
"diameter_kly": 170,
|
||||
"color": "#BBDDFF"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Bode's Galaxy",
|
||||
"name_zh": "波德星系",
|
||||
"data": {
|
||||
"type": "spiral",
|
||||
"distance_mly": 11.8,
|
||||
"ra": 148.97,
|
||||
"dec": 69.07,
|
||||
"magnitude": 6.9,
|
||||
"diameter_kly": 90,
|
||||
"color": "#CCDDFF"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Cigar Galaxy",
|
||||
"name_zh": "雪茄星系",
|
||||
"data": {
|
||||
"type": "starburst",
|
||||
"distance_mly": 11.5,
|
||||
"ra": 148.97,
|
||||
"dec": 69.68,
|
||||
"magnitude": 8.4,
|
||||
"diameter_kly": 37,
|
||||
"color": "#FFCCAA"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Centaurus A",
|
||||
"name_zh": "半人马座A",
|
||||
"data": {
|
||||
"type": "elliptical",
|
||||
"distance_mly": 13.7,
|
||||
"ra": 201.37,
|
||||
"dec": -43.02,
|
||||
"magnitude": 6.8,
|
||||
"diameter_kly": 60,
|
||||
"color": "#FFDDCC"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Sculptor Galaxy",
|
||||
"name_zh": "玉夫座星系",
|
||||
"data": {
|
||||
"type": "spiral",
|
||||
"distance_mly": 11.4,
|
||||
"ra": 15.15,
|
||||
"dec": -25.29,
|
||||
"magnitude": 7.2,
|
||||
"diameter_kly": 90,
|
||||
"color": "#CCDDEE"
|
||||
}
|
||||
}
|
||||
]
|
||||
|
||||
# Nebula data (12 nebulae)
|
||||
NEBULAE = [
|
||||
{
|
||||
"name": "Orion Nebula",
|
||||
"name_zh": "猎户座大星云",
|
||||
"data": {
|
||||
"type": "emission",
|
||||
"distance_ly": 1344,
|
||||
"ra": 83.82,
|
||||
"dec": -5.39,
|
||||
"magnitude": 4.0,
|
||||
"diameter_ly": 24,
|
||||
"color": "#FF6B9D"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Eagle Nebula",
|
||||
"name_zh": "鹰状星云",
|
||||
"data": {
|
||||
"type": "emission",
|
||||
"distance_ly": 7000,
|
||||
"ra": 274.70,
|
||||
"dec": -13.80,
|
||||
"magnitude": 6.0,
|
||||
"diameter_ly": 70,
|
||||
"color": "#FF8B7D"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Crab Nebula",
|
||||
"name_zh": "蟹状星云",
|
||||
"data": {
|
||||
"type": "supernova_remnant",
|
||||
"distance_ly": 6500,
|
||||
"ra": 83.63,
|
||||
"dec": 22.01,
|
||||
"magnitude": 8.4,
|
||||
"diameter_ly": 11,
|
||||
"color": "#FFAA66"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Ring Nebula",
|
||||
"name_zh": "环状星云",
|
||||
"data": {
|
||||
"type": "planetary",
|
||||
"distance_ly": 2300,
|
||||
"ra": 283.40,
|
||||
"dec": 33.03,
|
||||
"magnitude": 8.8,
|
||||
"diameter_ly": 1,
|
||||
"color": "#66DDFF"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Helix Nebula",
|
||||
"name_zh": "螺旋星云",
|
||||
"data": {
|
||||
"type": "planetary",
|
||||
"distance_ly": 700,
|
||||
"ra": 337.41,
|
||||
"dec": -20.84,
|
||||
"magnitude": 7.6,
|
||||
"diameter_ly": 2.5,
|
||||
"color": "#88CCFF"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Lagoon Nebula",
|
||||
"name_zh": "礁湖星云",
|
||||
"data": {
|
||||
"type": "emission",
|
||||
"distance_ly": 4100,
|
||||
"ra": 270.93,
|
||||
"dec": -24.38,
|
||||
"magnitude": 6.0,
|
||||
"diameter_ly": 55,
|
||||
"color": "#FF99AA"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Horsehead Nebula",
|
||||
"name_zh": "马头星云",
|
||||
"data": {
|
||||
"type": "dark",
|
||||
"distance_ly": 1500,
|
||||
"ra": 85.30,
|
||||
"dec": -2.46,
|
||||
"magnitude": 10.0,
|
||||
"diameter_ly": 3.5,
|
||||
"color": "#886655"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Eta Carinae Nebula",
|
||||
"name_zh": "船底座η星云",
|
||||
"data": {
|
||||
"type": "emission",
|
||||
"distance_ly": 7500,
|
||||
"ra": 161.26,
|
||||
"dec": -59.87,
|
||||
"magnitude": 3.0,
|
||||
"diameter_ly": 300,
|
||||
"color": "#FFAACC"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "North America Nebula",
|
||||
"name_zh": "北美洲星云",
|
||||
"data": {
|
||||
"type": "emission",
|
||||
"distance_ly": 1600,
|
||||
"ra": 312.95,
|
||||
"dec": 44.32,
|
||||
"magnitude": 4.0,
|
||||
"diameter_ly": 50,
|
||||
"color": "#FF7788"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Trifid Nebula",
|
||||
"name_zh": "三叶星云",
|
||||
"data": {
|
||||
"type": "emission",
|
||||
"distance_ly": 5200,
|
||||
"ra": 270.36,
|
||||
"dec": -23.03,
|
||||
"magnitude": 6.3,
|
||||
"diameter_ly": 25,
|
||||
"color": "#FF99DD"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Dumbbell Nebula",
|
||||
"name_zh": "哑铃星云",
|
||||
"data": {
|
||||
"type": "planetary",
|
||||
"distance_ly": 1360,
|
||||
"ra": 299.90,
|
||||
"dec": 22.72,
|
||||
"magnitude": 7.5,
|
||||
"diameter_ly": 1.44,
|
||||
"color": "#77DDFF"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Veil Nebula",
|
||||
"name_zh": "面纱星云",
|
||||
"data": {
|
||||
"type": "supernova_remnant",
|
||||
"distance_ly": 2400,
|
||||
"ra": 312.92,
|
||||
"dec": 30.72,
|
||||
"magnitude": 7.0,
|
||||
"diameter_ly": 110,
|
||||
"color": "#AADDFF"
|
||||
}
|
||||
}
|
||||
]
|
||||
|
||||
|
||||
async def update_static_data():
|
||||
"""Update static_data table with expanded astronomical data"""
|
||||
print("=" * 60)
|
||||
print("Updating static_data table")
|
||||
print("=" * 60)
|
||||
|
||||
async for session in get_db():
|
||||
# Update constellations
|
||||
print(f"\nUpdating {len(CONSTELLATIONS)} constellations...")
|
||||
for const in CONSTELLATIONS:
|
||||
stmt = pg_insert(StaticData).values(
|
||||
category="constellation",
|
||||
name=const["name"],
|
||||
name_zh=const["name_zh"],
|
||||
data=const["data"]
|
||||
)
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['category', 'name'],
|
||||
set_={
|
||||
'name_zh': const["name_zh"],
|
||||
'data': const["data"]
|
||||
}
|
||||
)
|
||||
await session.execute(stmt)
|
||||
print(f" ✓ {const['name']} ({const['name_zh']})")
|
||||
|
||||
# Update galaxies
|
||||
print(f"\nUpdating {len(GALAXIES)} galaxies...")
|
||||
for galaxy in GALAXIES:
|
||||
stmt = pg_insert(StaticData).values(
|
||||
category="galaxy",
|
||||
name=galaxy["name"],
|
||||
name_zh=galaxy["name_zh"],
|
||||
data=galaxy["data"]
|
||||
)
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['category', 'name'],
|
||||
set_={
|
||||
'name_zh': galaxy["name_zh"],
|
||||
'data': galaxy["data"]
|
||||
}
|
||||
)
|
||||
await session.execute(stmt)
|
||||
print(f" ✓ {galaxy['name']} ({galaxy['name_zh']})")
|
||||
|
||||
# Insert nebulae
|
||||
print(f"\nInserting {len(NEBULAE)} nebulae...")
|
||||
for nebula in NEBULAE:
|
||||
stmt = pg_insert(StaticData).values(
|
||||
category="nebula",
|
||||
name=nebula["name"],
|
||||
name_zh=nebula["name_zh"],
|
||||
data=nebula["data"]
|
||||
)
|
||||
stmt = stmt.on_conflict_do_update(
|
||||
index_elements=['category', 'name'],
|
||||
set_={
|
||||
'name_zh': nebula["name_zh"],
|
||||
'data': nebula["data"]
|
||||
}
|
||||
)
|
||||
await session.execute(stmt)
|
||||
print(f" ✓ {nebula['name']} ({nebula['name_zh']})")
|
||||
|
||||
await session.commit()
|
||||
break # Only use first session
|
||||
|
||||
print("\n" + "=" * 60)
|
||||
print("✓ Static data update complete!")
|
||||
print("=" * 60)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(update_static_data())
|
||||
|
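As with the resources upsert earlier, the `on_conflict_do_update(index_elements=['category', 'name'])` calls above assume a unique constraint on `static_data(category, name)`. If the upsert fails with "there is no unique or exclusion constraint matching the ON CONFLICT specification", the constraint is missing and can be added once. The sketch below is a one-off helper, not part of the committed scripts; the constraint name is an assumption, and PostgreSQL will raise an error if a constraint with that name already exists.

```python
# Sketch: add the unique constraint that the static_data upserts rely on.
import asyncio

from sqlalchemy import text

from app.database import engine


async def ensure_unique_constraint():
    async with engine.begin() as conn:
        await conn.execute(text(
            "ALTER TABLE static_data "
            "ADD CONSTRAINT uq_static_data_category_name UNIQUE (category, name)"
        ))


if __name__ == "__main__":
    asyncio.run(ensure_unique_constraint())
```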