Writing Data to CrateDB
2023-09-27 14:20:53
Environment:
Python: 3.6.5
CrateDB: 4.5.1
Insert script:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time

from crate import client

# Connect over CrateDB's HTTP endpoint; client.connect() is the
# supported entry point of the crate-python driver.
connection = client.connect("http://192.168.56.10:4200/",
                            username="devtest", password="123456")
cursor = connection.cursor()

def insert_data_cratedb():
    for i in range(1, 10):
        str_i = str(i)
        now_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())
        insert_sql = (
            "insert into db_test.metric_local(app,block_qps,count,exception_qps,"
            "id,machine_ip,pass_qps,resource,resource_code,rt,success_qps,"
            "timestamp,gmt_modified,gmt_create) values "
            "('%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s')" % (
                "app" + str_i, i, i, i, i, "machine_ip" + str_i, i,
                "resource" + str_i, i, i, i, now_time, now_time, now_time))
        cursor.execute(insert_sql)

if __name__ == '__main__':
    print("Start time: " + time.strftime('%Y-%m-%d %H:%M:%S', time.localtime()))
    insert_data_cratedb()
    print("End time: " + time.strftime('%Y-%m-%d %H:%M:%S', time.localtime()))
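Building the INSERT statement with `%` string formatting quotes every value as a string and is open to SQL injection. A safer variant is a sketch like the one below, assuming the same host, credentials, table, and row values as above: crate-python follows the DB-API with `?` ("qmark") placeholders, and `cursor.executemany()` sends all rows in one bulk request.

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Sketch: parameterized bulk insert with the crate-python driver.
import time

INSERT_SQL = (
    "INSERT INTO db_test.metric_local "
    "(app, block_qps, count, exception_qps, id, machine_ip, pass_qps, "
    "resource, resource_code, rt, success_qps, timestamp, gmt_modified, gmt_create) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"
)

def build_rows(n, now_time):
    # One parameter tuple per row; numeric columns stay numeric instead
    # of being quoted into strings.
    return [
        ("app%d" % i, i, i, i, i, "machine_ip%d" % i, i,
         "resource%d" % i, i, float(i), i, now_time, now_time, now_time)
        for i in range(1, n + 1)
    ]

if __name__ == "__main__":
    from crate import client

    now_time = time.strftime("%Y-%m-%d %H:%M:%S")
    connection = client.connect("http://192.168.56.10:4200/",
                                username="devtest", password="123456")
    cursor = connection.cursor()
    # executemany() performs a single bulk operation for all 9 rows.
    cursor.executemany(INSERT_SQL, build_rows(9, now_time))
    connection.close()
```

Besides removing the quoting problem, the bulk request also avoids one HTTP round trip per row, which matters once the loop grows past a handful of inserts.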
Table schema:
CREATE TABLE IF NOT EXISTS "db_test"."metric_local" (
"app" TEXT NOT NULL,
"block_qps" BIGINT NOT NULL,
"count" INTEGER NOT NULL,
"exception_qps" BIGINT NOT NULL,
"gmt_create" TIMESTAMP WITH TIME ZONE NOT NULL,
"gmt_modified" TIMESTAMP WITH TIME ZONE NOT NULL,
"id" BIGINT,
"machine_ip" TEXT,
"pass_qps" BIGINT NOT NULL,
"resource" TEXT NOT NULL,
"resource_code" INTEGER NOT NULL,
"rt" DOUBLE PRECISION NOT NULL,
"success_qps" BIGINT NOT NULL,
"timestamp" TIMESTAMP WITH TIME ZONE NOT NULL,
"month" TIMESTAMP WITH TIME ZONE GENERATED ALWAYS AS date_trunc('month', "timestamp")
)
CLUSTERED INTO 4 SHARDS
PARTITIONED BY ("month")
WITH (
"allocation.max_retries" = 5,
"blocks.metadata" = false,
"blocks.read" = false,
"blocks.read_only" = false,
"blocks.read_only_allow_delete" = false,
"blocks.write" = false,
codec = 'default',
column_policy = 'strict',
"mapping.total_fields.limit" = 1000,
max_ngram_diff = 1,
max_shingle_diff = 3,
number_of_replicas = '0-1',
refresh_interval = 1000,
"routing.allocation.enable" = 'all',
"routing.allocation.total_shards_per_node" = -1,
"store.type" = 'fs',
"translog.durability" = 'REQUEST',
"translog.flush_threshold_size" = 536870912,
"translog.sync_interval" = 5000,
"unassigned.node_left.delayed_timeout" = 60000,
"write.wait_for_active_shards" = '1'
)
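The generated "month" column means the partition key is derived automatically from "timestamp" on every insert, so the script above never has to supply it. As a sanity check, this small function mirrors my understanding of `date_trunc('month', ...)` semantics in plain Python:

```python
from datetime import datetime, timezone

def month_partition(ts):
    # Mirror date_trunc('month', ts): keep year and month, reset the day
    # to 1 and all time-of-day fields to zero.
    return ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)

sample = datetime(2023, 9, 27, 14, 20, 53, tzinfo=timezone.utc)
print(month_partition(sample).isoformat())  # 2023-09-01T00:00:00+00:00
```

Every row written in September 2023 therefore lands in the same partition, and old months can be dropped cheaply with DELETE on a single "month" value.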