apim_metrics is API Manager's analytics/diagnostics log, and enabling it is well worth the effort.
- https://apim.docs.wso2.com/en/latest/monitoring/api-analytics/on-prem/datadog-installation-guide/#step-12-enabling-logs
- https://apim.docs.wso2.com/en/4.4.0/api-analytics/choreo-analytics/getting-started-guide/
Enabling the diagnostic log
- To enable metrics logging, edit the log4j2.properties file located at wso2am-4.x.x/repository/conf and add APIM_METRICS_APPENDER to the appenders list:

```properties
appenders = APIM_METRICS_APPENDER, .... (list of other available appenders)
```
- Register an APIM_METRICS_APPENDER logger for ELK. ELK is the most common log collection and analysis platform, but the same log can be shipped to other cloud platforms as well:
```properties
appender.APIM_METRICS_APPENDER.type = RollingFile
appender.APIM_METRICS_APPENDER.name = APIM_METRICS_APPENDER
appender.APIM_METRICS_APPENDER.fileName = ${sys:carbon.home}/repository/logs/apim_metrics.log
appender.APIM_METRICS_APPENDER.filePattern = ${sys:carbon.home}/repository/logs/apim_metrics-%d{MM-dd-yyyy}-%i.log
appender.APIM_METRICS_APPENDER.layout.type = PatternLayout
appender.APIM_METRICS_APPENDER.layout.pattern = %d{HH:mm:ss,SSS} [%X{ip}-%X{host}] [%t] %5p %c{1} %m%n
appender.APIM_METRICS_APPENDER.policies.type = Policies
appender.APIM_METRICS_APPENDER.policies.time.type = TimeBasedTriggeringPolicy
appender.APIM_METRICS_APPENDER.policies.time.interval = 1
appender.APIM_METRICS_APPENDER.policies.time.modulate = true
appender.APIM_METRICS_APPENDER.policies.size.type = SizeBasedTriggeringPolicy
appender.APIM_METRICS_APPENDER.policies.size.size=1000MB
appender.APIM_METRICS_APPENDER.strategy.type = DefaultRolloverStrategy
appender.APIM_METRICS_APPENDER.strategy.max = 10
```
- Then wire the reporter logger for org.wso2.am.analytics.publisher.reporter.elk to this appender:

```properties
loggers = reporter, ...(list of other available loggers)

logger.reporter.name = org.wso2.am.analytics.publisher.reporter.elk
logger.reporter.level = INFO
logger.reporter.additivity = false
logger.reporter.appenderRef.APIM_METRICS_APPENDER.ref = APIM_METRICS_APPENDER
```
By default the apim_metrics.log file is rolled every day, or as soon as it reaches 1000 MB. Only 10 rolled files are kept; older ones are deleted automatically. These limits can be changed through the appender configuration shown above.
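The log4j2 appender is only half of the switch. Per WSO2's ELK-based analytics guide, the analytics reporter itself is enabled in repository/conf/deployment.toml; a minimal sketch, worth verifying against the documentation for your exact APIM version:

```toml
[apim.analytics]
enable = true
type = "elk"
```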
The call chain is: end user → the application they created → the API exposed through WSO2. The most useful fields in each metrics event are:
- apiName: name of the API
- proxyResponseCode: status code returned by the API's backend service
- destination: backend address configured on the gateway
- apiContext: full context path of the API
- applicationId: ID of the application
- applicationName: name of the application
- userIp: IP address of the caller
- {"apiName":"user-register","proxyResponseCode":200,"destination":"https://test.ddd.com/user-
- register","apiCreatorTenantDomain":"carbon.super","platform":"Other","apiMethod":"GET","apiVersion":"1.0.0","gatewayType":"SYNAPSE","apiCreator":"admin","responseCacheHit":false,"backendLatency":111,"correlationId":
- "0e5482a5-b281-4b91-a728-1b90f443110c","requestMediationLatency":389,"keyType":"PRODUCTION","apiId":"d642741c-b34a-4fde-8e47-5ef70455f638","applicationName":"test1","targetResponseCode":200,"requestTimestamp":"2025-
- 05-19T02:01:28.765Z","applicationOwner":"admin","userAgent":"PostmanRuntime","userName":"admin@carbon.super","apiResourceTemplate":"/*","regionId":"default","responseLatency":511,"responseMediationLatency":11,"userI
- p":"111.1.1.2","apiContext":"/user/1.0.0","applicationId":"a18b9944-5ddf-4708-9922-a45e04474f81","apiType":"HTTP","properties":{"commonName":"N/A","responseContentType":"application/json","subtype":"D
- EFAULT","isEgress":false,"apiContext":"/user-register/1.0.0","responseSize":0,"userName":"admin@carbon.super"}}
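If something other than Logstash consumes these events, the JSON can simply be deserialised back into a map. A minimal sketch using Gson; the class name MetricsEventParser and the shortened sample string are illustrative, only the field names come from the event above:

```java
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

import java.lang.reflect.Type;
import java.util.Map;

// Minimal sketch: turn one apim_metrics JSON event back into a Map and read a few fields.
public class MetricsEventParser {

    private static final Gson GSON = new Gson();

    static Map<String, Object> parse(String json) {
        Type type = new TypeToken<Map<String, Object>>() { }.getType();
        return GSON.fromJson(json, type);
    }

    public static void main(String[] args) {
        // Shortened version of the sample event above.
        String event = "{\"apiName\":\"user-register\",\"proxyResponseCode\":200,"
                + "\"responseLatency\":511,\"userIp\":\"111.1.1.2\"}";
        Map<String, Object> fields = parse(event);
        // Note: Gson maps JSON numbers to Double by default.
        System.out.println(fields.get("apiName") + " -> " + fields.get("proxyResponseCode"));
    }
}
```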
Publishing to other systems
The module /home/wso2carbon/wso2am-4.5.0/repository/components/plugins/org.wso2.am.analytics.publisher.client_1.2.23.jar can be extended with custom code. It already contains the remote publisher and the ELK log reporter, and a Kafka publisher can be added on top of it. After extending the code, compile the module and overwrite the original jar.
- Extension of org.wso2.am.analytics.publisher.reporter.elk.ELKCounterMetric.java:
```java
@Override
public int incrementCount(MetricEventBuilder builder) throws MetricReportingException {
    Map<String, Object> event = builder.build();
    String jsonString = gson.toJson(event);
    String jsonStringResult = jsonString.replaceAll("[\r\n]", "");
    // Original behaviour: write the event to the APIM_METRICS_APPENDER log file.
    log.info("apimMetrics: " + name.replaceAll("[\r\n]", "") + ", properties :" + jsonStringResult);
    // Added: forward the same event to Kafka.
    KafkaMqProducer.publishEvent("apim-metrics", jsonStringResult);
    return 0;
}
```

Build the module:

```shell
$ mvn clean install -DskipTests -Dcheckstyle.skip
```
```java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

/**
 * Kafka producer used by the extended ELK reporter.
 */
public class KafkaMqProducer {

    // ConfigFactory: project-specific configuration helper providing the Kafka bootstrap address.
    private static final String BOOTSTRAP_SERVER = ConfigFactory.getInstance().getStrPropertyValue("kafka.host");
    private static final Logger logger = LogManager.getLogger(KafkaMqProducer.class);
    private static KafkaProducer<String, String> producer;
    private static final ExecutorService executorService = Executors.newFixedThreadPool(4);

    private static synchronized KafkaProducer<String, String> getProducer() {
        if (producer == null) {
            // Reset the thread context class loader so the Kafka client
            // is not created with the OSGi bundle's context class loader.
            resetThreadContext();
            // Create the producer lazily on first use.
            producer = new KafkaProducer<>(getProperties());
        }
        return producer;
    }

    public static void publishEvent(String topic, String value) {
        executorService.execute(() -> {
            try {
                // Create a producer record for the metrics event.
                ProducerRecord<String, String> eventRecord = new ProducerRecord<>(topic, value);
                // Send data asynchronously; log any delivery failure.
                getProducer().send(eventRecord, new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                        if (e != null) {
                            logger.error("kafka.send.error", e);
                        }
                    }
                });
            } catch (Exception ex) {
                logger.error("kafka.error", ex);
            }
        });
    }

    private static void resetThreadContext() {
        Thread.currentThread().setContextClassLoader(null);
    }

    public static Properties getProperties() {
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVER);
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, "16384");
        return properties;
    }
}
```
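To check that events really reach Kafka, any standard consumer on the apim-metrics topic will do. A minimal sketch; the bootstrap address and group id are placeholders, only the topic name matches the producer above:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Minimal sketch: consume the apim-metrics topic and print each event as-is.
public class ApimMetricsConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "apim-metrics-check");      // placeholder
        props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("apim-metrics"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}
```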
Integrating third-party components
To integrate third-party components such as Kafka or RabbitMQ, copy their client jars into /home/wso2carbon/wso2am-4.5.0/lib. Treat that directory as a shared library directory: jars placed there can be loaded by other modules, much like modules in JBoss, except that on this OSGi-based platform there is no need to declare them explicitly via a jboss-deployment-structure.xml. If you deploy with Docker, add these jars on top of the official image.
Dockerfile:

```dockerfile
# Based on the official WSO2 APIM image
FROM wso2/wso2am:4.5.0

# Third-party jars go into the lib directory
COPY lib/*.jar /home/wso2carbon/wso2am-4.5.0/lib/

# Business plugin jars replace/overwrite the target JAR files. After rebuilding the
# image, update the sha256 value in values.yaml so the servers pull the latest image.
COPY plugins/*.jar /home/wso2carbon/wso2am-4.5.0/repository/components/plugins/
```
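Building and publishing the customised image is then the usual Docker workflow; the registry name and tag below are placeholders:

```shell
docker build -t registry.example.com/wso2am-custom:4.5.0 .
docker push registry.example.com/wso2am-custom:4.5.0
```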