CDH Network Interface Speed Suppress


Problem Description

CDH Network Interface Speed Suppress

I have configured CDH 5.5.2 on my CentOS 7.1 machine. Everything is running fine except for the following network-related warning:

Network Interface Speed Suppress...

The following network interface appears to be operating at less than full speed: virbr0-nic. 2 host network interfaces appear to be operating at full speed. For 1 host network interface, the Cloudera Manager Agent was unable to determine the duplex mode or interface speed.

Can anyone help me resolve this issue?


Reference Solution

Solution 1:

This happens because, by default, CDH hosts are expected to be deployed on servers with gigabit (1 Gb/s) or faster network interfaces. You can always change the default thresholds to match your server's hardware:

1. In Cloudera Manager, navigate to "Hosts -> All Hosts", then click "Configuration" on that page.

2. In the search bar, search for "Network Interface".

3. Depending on the type of network you are on, adjust the values of the two configuration parameters "Network Interface Expected Link Speed" and "Network Interface Expected Duplex Mode".

4. Deploy the new configuration and restart Cloudera Manager.

(by BruceWayne / abolfazl shahbazi)
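The warning is driven by the speed and duplex values the Cloudera Manager Agent reads for each interface. Before (or instead of) lowering the expected thresholds, you can check what the kernel reports yourself. A minimal sketch using the standard `/sys/class/net` sysfs entries (interface names will vary by host; this is an illustration, not part of the original answer):

```shell
# List each network interface with its reported link speed and duplex.
# virbr0-nic is the placeholder NIC that libvirt attaches to its default
# virbr0 bridge; it often reports a low speed such as 10 Mb/s, which is
# what trips the Cloudera Manager health check.
for iface in /sys/class/net/*; do
    name=$(basename "$iface")
    # speed/duplex are unreadable for virtual or carrier-less interfaces,
    # so fall back to "unknown" rather than failing
    speed=$(cat "$iface/speed" 2>/dev/null || echo "unknown")
    duplex=$(cat "$iface/duplex" 2>/dev/null || echo "unknown")
    echo "$name: speed=${speed} Mb/s, duplex=${duplex}"
done
```

If you do not actually need the libvirt default network on a Hadoop node, removing or ignoring the virbr0/virbr0-nic interfaces is an alternative to relaxing the expected-speed thresholds cluster-wide.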

References

  1. CDH Network Interface Speed Suppress (CC BY‑SA 2.5/3.0/4.0)

#hadoop #cloudera-cdh #centos7 #hadoop2






Related Questions

hadoop -libjars and ClassNotFoundException

Restricting loading of log files in Pig Latin based on interested date range as parameter input

Choosing a MapReduce Design Pattern

Custom Partitioner Error

Connection Refused - Why does zookeeper try to connect to localhost instead of a server ip

Hive bucketing and partitioning for an existing table

How to read files in HDFS in R without losing column and row names

CDH Network Interface Speed Suppress

Does Apache Apex rely on HDFS or does it have its own file system?

java.io.IOException: Job failed! when running a sample app on my osx with hadoop-0.19.1

How to validate a list using PIG script

set HBase properties for Spark Job using spark-submit







Comments