

When Raspberry Pi Meets ChatGPT: A Magic Hotline Is Born!

上海晶珩電子科技有限公司 · 2025-04-13 09:04

Rotary dial phones went out of fashion decades ago, yet many people still remember them fondly. One of these old phones can be turned into a ChatGPT hotline. This project, developed by Pollux Labs, lets you connect a vintage rotary phone to a Raspberry Pi: pick up the handset, dial a number, and enjoy an AI-powered conversation, just as in the days of traditional telephony.

The Raspberry Pi handles speech recognition, text generation, and audio playback, and ChatGPT remembers everything said during the call. The result is a unique interaction that pairs old-fashioned dialing with cutting-edge artificial intelligence. Let's look at how it is done.

Why Turn a Rotary Phone into a ChatGPT Hotline

Many people love the retro feel of a rotary phone: even though the conversation is modern, the click of the dial and the weight of the handset take you back in time. ChatGPT adds a playful, voice-driven experience that is nothing like typing on a keyboard. There is also the engineering challenge of wiring the phone's speaker, microphone, and dial to a Raspberry Pi. On top of that, it is a fun way to repurpose old technology.

Hearing ChatGPT's replies through a telephone can spark creativity. Before moving on to a full voice assistant, you could add music, news updates, or other AI services. Hands-on work is the key to learning, and this project covers both hardware and software. It shows just how flexible simple electronics can be. Best of all, holding an AI conversation over a rotary dial surprises and delights everyone who tries it.

What You Need to Convert the Phone into a ChatGPT Hotline

First, you need a rotary phone with enough room inside for the Raspberry Pi and its wiring. Models from the 1970s or 1980s usually have plenty of internal space, so you can route the cables without drilling. At minimum you need a Raspberry Pi 4B, though a Raspberry Pi 5 will perform better.

You also need a microphone to capture audio, and a way to route the Raspberry Pi's audio output to the phone's speaker. A USB lavalier microphone or a small USB microphone adapter should fit neatly inside the case.

You might ask why a lavalier microphone is used instead of the microphone built into the handset. It turns out the handset microphone is difficult to work with, mainly because the microphones used in rotary phones are analog rather than digital.

Next, gather the basic electronics tools: a soldering iron, wire cutters, and a multimeter. These help you identify the dial's pulse wires, test connections, and join the Raspberry Pi's audio output to the phone's speaker wires. You also need jumper wires or connectors that match the Raspberry Pi's GPIO pins, and possibly a small push button to detect whether the handset is on or off the hook.

On the software side, install the Python libraries for speech recognition, text-to-speech, and the OpenAI API. Get an OpenAI API key and reference it in your Python script to generate ChatGPT replies.
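Before wiring anything into the phone, it may help to confirm that the key and client work on their own. Below is a minimal sketch (my own addition, not part of the Pollux Labs project): the file name check_openai.py is hypothetical, and it assumes an OPENAI_API_KEY entry in a .env file next to the script. The gpt-4o-mini model name is the one the hotline script itself uses.

# check_openai.py - quick sanity check for the API key (illustrative sketch)
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads OPENAI_API_KEY from a .env file in the working directory
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise SystemExit("OPENAI_API_KEY not found - create a .env file first")

client = OpenAI(api_key=api_key)
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # same model the hotline script uses
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(reply.choices[0].message.content)

If this prints a greeting, the key, network connection, and OpenAI client are all working, and any later problems are more likely in the audio or GPIO wiring.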

Steps to Complete the Conversion and Build the ChatGPT Hotline

Repurposing the phone and the Raspberry Pi involves careful wiring and software configuration. In this guide you will learn how to open up the phone, identify the dial pulses, and set up the Raspberry Pi for speech-to-text and text-to-speech conversion. Verify every wire and pin assignment carefully, because a single mismatch can cause errors.

1. Remove the phone's cover and locate the speaker wires, the rotary dial wires, and a spot where a button can be attached to detect the hook state.


2. Strip the 3.5 mm audio cable and solder 2.8 mm flat connectors onto its ground wire and one channel wire, then plug them into the handset's connector socket.


3. Place a USB microphone (or adapter) inside the phone, making sure the Raspberry Pi can pick up sound clearly.


4. Use a multimeter to confirm which dial wires carry the pulses. Connect these wires to a GPIO pin and ground. Then wire up the hook button so the software can sense when the handset is lifted. (A small wiring-test sketch follows this step.)
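If you want to verify the dial wiring before running the full hotline script, a small test along these lines can help. It is only a sketch of my own, assuming the same GPIO assignments as the project script (GPIO 23 for the dial pulses, GPIO 17 for the hook switch) and a hypothetical file name dial_test.py; adjust the pins to match your wiring.

# dial_test.py - count rotary pulses and print the dialed digit (illustrative sketch)
import time
from gpiozero import Button

dial = Button(23, pull_up=True)  # pulse contact of the rotary dial
hook = Button(17, pull_up=True)  # hook switch: pressed = receiver on hook

pulses = 0
last_pulse = 0.0

def on_pulse():
    global pulses, last_pulse
    now = time.time()
    if now - last_pulse > 0.1:  # simple debounce, like the project script
        pulses += 1
        last_pulse = now

dial.when_pressed = on_pulse

print("Pick up the receiver and dial a digit...")
while True:
    time.sleep(0.1)
    # a pause after the last pulse means the digit is complete
    if pulses and time.time() - last_pulse > 0.5:
        digit = 0 if pulses == 10 else pulses  # "0" arrives as ten pulses
        print(f"Dialed: {digit} (hook {'down' if hook.is_pressed else 'up'})")
        pulses = 0

Each dialed digit should print once; if a single digit produces too many or too few pulses, revisit the debounce time or the wiring.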


5. Install the necessary audio libraries on your Raspberry Pi, including PyAudio, PyGame, and the OpenAI client. Download or create the audio files to be played (such as a dial tone), and store your OpenAI key in a .env file. (A quick microphone check follows this step.)
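Since the handset speaker and the USB microphone are separate devices, it is worth confirming that the Pi actually sees the microphone before going further. The sketch below is my own addition (the file name list_inputs.py is hypothetical); it lists the input devices PyAudio can open, and the USB mic should appear with at least one input channel.

# list_inputs.py - show the audio input devices PyAudio can see (illustrative sketch)
import pyaudio

pa = pyaudio.PyAudio()
for index in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(index)
    if info.get("maxInputChannels", 0) > 0:
        print(f"{index}: {info['name']} ({info['maxInputChannels']} input channels)")
pa.terminate()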

6. Next, you need a Python script that captures audio from the microphone, sends it to ChatGPT for processing, and plays the AI's response through the phone's speaker. You can write your own script or use the one written by Pollux Labs. Just make sure to adjust the GPIO pin numbers, audio settings, and prompt text to suit your needs.

7. Run the script manually to confirm it works. Once you hear the dial tone and ChatGPT responds to your voice, add a system service so the phone starts automatically when the Raspberry Pi boots (a sample unit file follows).
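A common way to do this on Raspberry Pi OS is a small systemd unit. The following is only a sketch under stated assumptions: the unit name callgpt.service and the script file name callgpt.py are hypothetical, the script is assumed to live in /home/pi/Desktop/callGPT (the same directory the script uses for its audio files) and to run as the pi user. Adjust the paths to match your setup.

# /etc/systemd/system/callgpt.service  (hypothetical unit name)
[Unit]
Description=ChatGPT rotary phone hotline
After=network-online.target sound.target

[Service]
User=pi
WorkingDirectory=/home/pi/Desktop/callGPT
ExecStart=/usr/bin/python3 /home/pi/Desktop/callGPT/callgpt.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

After saving the file, run sudo systemctl daemon-reload and then sudo systemctl enable --now callgpt.service so the hotline starts on every boot. Setting WorkingDirectory matters here because the script loads its .env file from the current directory.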

If you cannot access the script, here it is for reference.

#!/usr/bin/env python3
"""
ChatGPT for Rotary Phone
https://en.polluxlabs.net

MIT License

Copyright (c) 2025 Frederik Kumbartzki

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""

import os
import sys
import time
import threading
from queue import Queue
from pathlib import Path

# Audio and speech libraries
os.environ['PYGAME_HIDE_SUPPORT_PROMPT'] = "hide"
import pygame
import pyaudio
import numpy as np
import wave
from openai import OpenAI

# OpenAI API Key
from dotenv import load_dotenv
load_dotenv()
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    print("Error: OPENAI_API_KEY not found.")
    sys.exit(1)

# Hardware libraries
from gpiozero import Button

# Constants and configurations
AUDIO_DIR = "/home/pi/Desktop/callGPT"
AUDIO_FILES = {
    "tone": f"{AUDIO_DIR}/a440.mp3",
    "try_again": f"{AUDIO_DIR}/tryagain.mp3",
    "error": f"{AUDIO_DIR}/error.mp3"
}
DIAL_PIN = 23    # GPIO pin for rotary dial
SWITCH_PIN = 17  # GPIO pin for hook switch

# Audio parameters
AUDIO_FORMAT = pyaudio.paInt16
CHANNELS = 1
SAMPLE_RATE = 16000
CHUNK_SIZE = 1024
SILENCE_THRESHOLD = 500
MAX_SILENCE_CHUNKS = 20  # About 1.3 seconds of silence
DEBOUNCE_TIME = 0.1      # Time in seconds for debouncing button inputs


class AudioManager:
    """Manages audio playback and recording."""

    def __init__(self):
        pygame.mixer.init(frequency=44100, buffer=2048)
        self.playing_audio = False
        self.audio_thread = None
        # Create temp directory
        self.temp_dir = Path(__file__).parent / "temp_audio"
        self.temp_dir.mkdir(exist_ok=True)
        # Preload sounds
        self.sounds = {}
        for name, path in AUDIO_FILES.items():
            try:
                self.sounds[name] = pygame.mixer.Sound(path)
            except:
                print(f"Error loading {path}")

    def play_file(self, file_path, wait=True):
        try:
            sound = pygame.mixer.Sound(file_path)
            channel = sound.play()
            if wait and channel:
                while channel.get_busy():
                    pygame.time.Clock().tick(30)
        except:
            pygame.mixer.music.load(file_path)
            pygame.mixer.music.play()
            if wait:
                while pygame.mixer.music.get_busy():
                    pygame.time.Clock().tick(30)

    def start_continuous_tone(self):
        self.playing_audio = True
        if self.audio_thread and self.audio_thread.is_alive():
            self.playing_audio = False
            self.audio_thread.join(timeout=1.0)
        self.audio_thread = threading.Thread(target=self._play_continuous_tone)
        self.audio_thread.daemon = True
        self.audio_thread.start()

    def _play_continuous_tone(self):
        try:
            if "tone" in self.sounds:
                self.sounds["tone"].play(loops=-1)
                while self.playing_audio:
                    time.sleep(0.1)
                self.sounds["tone"].stop()
            else:
                pygame.mixer.music.load(AUDIO_FILES["tone"])
                pygame.mixer.music.play(loops=-1)
                while self.playing_audio:
                    time.sleep(0.1)
                pygame.mixer.music.stop()
        except Exception as e:
            print(f"Error during tone playback: {e}")

    def stop_continuous_tone(self):
        self.playing_audio = False
        if "tone" in self.sounds:
            self.sounds["tone"].stop()
        if pygame.mixer.get_init() and pygame.mixer.music.get_busy():
            pygame.mixer.music.stop()


class SpeechRecognizer:
    """Handles real-time speech recognition using OpenAI's Whisper API."""

    def __init__(self, openai_client):
        self.client = openai_client
        self.audio = pyaudio.PyAudio()
        self.stream = None

    def capture_and_transcribe(self):
        # Setup audio stream if not already initialized
        if not self.stream:
            self.stream = self.audio.open(
                format=AUDIO_FORMAT,
                channels=CHANNELS,
                rate=SAMPLE_RATE,
                input=True,
                frames_per_buffer=CHUNK_SIZE,
            )
        # Set up queue and threading
        audio_queue = Queue()
        stop_event = threading.Event()
        # Start audio capture thread
        capture_thread = threading.Thread(target=self._capture_audio, args=(audio_queue, stop_event))
        capture_thread.daemon = True
        capture_thread.start()
        # Process the audio
        result = self._process_audio(audio_queue, stop_event)
        # Cleanup
        stop_event.set()
        capture_thread.join()
        return result

    def _capture_audio(self, queue, stop_event):
        while not stop_event.is_set():
            try:
                data = self.stream.read(CHUNK_SIZE, exception_on_overflow=False)
                queue.put(data)
            except KeyboardInterrupt:
                break

    def _process_audio(self, queue, stop_event):
        buffer = b""
        speaking = False
        silence_counter = 0
        while not stop_event.is_set():
            if not queue.empty():
                chunk = queue.get()
                # Check volume
                data_np = np.frombuffer(chunk, dtype=np.int16)
                volume = np.abs(data_np).mean()
                # Detect speaking
                if volume > SILENCE_THRESHOLD:
                    speaking = True
                    silence_counter = 0
                elif speaking:
                    silence_counter += 1
                # Add chunk to buffer
                buffer += chunk
                # Process if we've detected end of speech
                if speaking and silence_counter > MAX_SILENCE_CHUNKS:
                    print("Processing speech...")
                    # Save to temp file
                    temp_file = Path(__file__).parent / "temp_recording.wav"
                    self._save_audio(buffer, temp_file)
                    # Transcribe
                    try:
                        return self._transcribe_audio(temp_file)
                    except Exception as e:
                        print(f"Error during transcription: {e}")
                    buffer = b""
                    speaking = False
                    silence_counter = 0
        return None

    def _save_audio(self, buffer, file_path):
        with wave.open(str(file_path), "wb") as wf:
            wf.setnchannels(CHANNELS)
            wf.setsampwidth(self.audio.get_sample_size(AUDIO_FORMAT))
            wf.setframerate(SAMPLE_RATE)
            wf.writeframes(buffer)

    def _transcribe_audio(self, file_path):
        with open(file_path, "rb") as audio_file:
            transcription = self.client.audio.transcriptions.create(
                model="whisper-1",
                file=audio_file,
                language="en"
            )
        return transcription.text

    def cleanup(self):
        if self.stream:
            self.stream.stop_stream()
            self.stream.close()
            self.stream = None
        if self.audio:
            self.audio.terminate()
            self.audio = None


class ResponseGenerator:
    """Generates and speaks streaming responses from OpenAI's API."""

    def __init__(self, openai_client, temp_dir):
        self.client = openai_client
        self.temp_dir = temp_dir
        self.answer = ""

    def generate_streaming_response(self, user_input, conversation_history=None):
        self.answer = ""
        collected_messages = []
        chunk_files = []
        # Audio playback queue and control variables
        audio_queue = Queue()
        playing_event = threading.Event()
        stop_event = threading.Event()
        # Start the audio playback thread
        playback_thread = threading.Thread(
            target=self._audio_playback_worker,
            args=(audio_queue, playing_event, stop_event)
        )
        playback_thread.daemon = True
        playback_thread.start()
        # Prepare messages
        messages = [{
            "role": "system",
            "content": "You are a humorous conversation partner engaged in a natural phone call. Keep your answers concise and to the point."
        }]
        # Use conversation history if available, but limit to last 4 pairs
        if conversation_history and len(conversation_history) > 0:
            if len(conversation_history) > 8:
                conversation_history = conversation_history[-8:]
            messages.extend(conversation_history)
        else:
            messages.append({"role": "user", "content": user_input})
        # Stream the response
        stream = self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            stream=True
        )
        # Variables for sentence chunking
        sentence_buffer = ""
        chunk_counter = 0
        for chunk in stream:
            if chunk.choices and hasattr(chunk.choices[0], 'delta') and hasattr(chunk.choices[0].delta, 'content'):
                content = chunk.choices[0].delta.content
                if content:
                    collected_messages.append(content)
                    sentence_buffer += content
                    # Process when we have a complete sentence or phrase
                    if any(end in content for end in [".", "!", "?", ":"]) or len(sentence_buffer) > 100:
                        # Generate speech for this chunk
                        chunk_file_path = self.temp_dir / f"chunk_{chunk_counter}.mp3"
                        try:
                            # Generate speech
                            response = self.client.audio.speech.create(
                                model="tts-1",
                                voice="alloy",
                                input=sentence_buffer,
                                speed=1.0
                            )
                            response.stream_to_file(str(chunk_file_path))
                            chunk_files.append(str(chunk_file_path))
                            # Add to playback queue
                            audio_queue.put(str(chunk_file_path))
                            # Signal playback thread if it's waiting
                            playing_event.set()
                        except Exception as e:
                            print(f"Error generating speech for chunk: {e}")
                        # Reset buffer and increment counter
                        sentence_buffer = ""
                        chunk_counter += 1
        # Process any remaining text
        if sentence_buffer.strip():
            chunk_file_path = self.temp_dir / f"chunk_{chunk_counter}.mp3"
            try:
                response = self.client.audio.speech.create(
                    model="tts-1",
                    voice="alloy",
                    input=sentence_buffer,
                    speed=1.2
                )
                response.stream_to_file(str(chunk_file_path))
                chunk_files.append(str(chunk_file_path))
                audio_queue.put(str(chunk_file_path))
                playing_event.set()
            except Exception as e:
                print(f"Error generating final speech chunk: {e}")
        # Signal end of generation
        audio_queue.put(None)  # Sentinel to signal end of queue
        # Wait for playback to complete
        playback_thread.join()
        stop_event.set()  # Ensure the thread stops
        # Combine all messages
        self.answer = "".join(collected_messages)
        print(self.answer)
        # Clean up temp files
        self._cleanup_temp_files(chunk_files)
        return self.answer

    def _audio_playback_worker(self, queue, playing_event, stop_event):
        while not stop_event.is_set():
            # Wait for a signal that there's something to play
            if queue.empty():
                playing_event.wait(timeout=0.1)
                playing_event.clear()
                continue
            # Get the next file to play
            file_path = queue.get()
            # None is our sentinel value to signal end of queue
            if file_path is None:
                break
            try:
                # Play audio and wait for completion
                pygame.mixer.music.load(file_path)
                pygame.mixer.music.play()
                # Wait for playback to complete before moving to next chunk
                while pygame.mixer.music.get_busy() and not stop_event.is_set():
                    pygame.time.Clock().tick(30)
                # Small pause between chunks for more natural flow
                time.sleep(0.05)
            except Exception as e:
                print(f"Error playing audio chunk: {e}")

    def _cleanup_temp_files(self, file_list):
        # Wait a moment to ensure files aren't in use
        time.sleep(0.5)
        for file_path in file_list:
            try:
                if os.path.exists(file_path):
                    os.remove(file_path)
            except Exception as e:
                print(f"Error removing temp file: {e}")


class RotaryDialer:
    """Handles rotary phone dialing and services."""

    def __init__(self, openai_client):
        self.client = openai_client
        self.audio_manager = AudioManager()
        self.speech_recognizer = SpeechRecognizer(openai_client)
        self.response_generator = ResponseGenerator(openai_client, self.audio_manager.temp_dir)
        # Set up GPIO
        self.dial_button = Button(DIAL_PIN, pull_up=True)
        self.switch = Button(SWITCH_PIN, pull_up=True)
        # State variables
        self.pulse_count = 0
        self.last_pulse_time = 0
        self.running = True

    def start(self):
        # Set up callbacks
        self.dial_button.when_pressed = self._pulse_detected
        self.switch.when_released = self._handle_switch_released
        self.switch.when_pressed = self._handle_switch_pressed
        # Start in ready state
        if not self.switch.is_pressed:
            # Receiver is picked up
            self.audio_manager.start_continuous_tone()
        else:
            # Receiver is on hook
            print("Phone in idle state. Pick up the receiver to begin.")
        print("Rotary dial ready. Dial a number when the receiver is picked up.")
        try:
            self._main_loop()
        except KeyboardInterrupt:
            print("Terminating...")
            self._cleanup()

    def _main_loop(self):
        while self.running:
            self._check_number()
            time.sleep(0.1)

    def _pulse_detected(self):
        if not self.switch.is_pressed:
            current_time = time.time()
            if current_time - self.last_pulse_time > DEBOUNCE_TIME:
                self.pulse_count += 1
                self.last_pulse_time = current_time

    def _check_number(self):
        if not self.switch.is_pressed and self.pulse_count > 0:
            self.audio_manager.stop_continuous_tone()
            time.sleep(1.5)  # Wait between digits
            if self.pulse_count == 10:
                self.pulse_count = 0  # "0" is sent as 10 pulses
            print("Dialed service number:", self.pulse_count)
            if self.pulse_count == 1:
                self._call_gpt_service()
                # Return to dial tone after conversation
                if not self.switch.is_pressed:
                    # Only if the receiver wasn't hung up
                    self._reset_state()
            self.pulse_count = 0

    def _call_gpt_service(self):
        # Conversation history for context
        conversation_history = []
        first_interaction = True
        # For faster transitions
        speech_recognizer = self.speech_recognizer
        response_generator = self.response_generator
        # Preparation for next recording
        next_recording_thread = None
        next_recording_queue = Queue()
        # Conversation loop - runs until the receiver is hung up
        while not self.switch.is_pressed:
            # If there's a prepared next recording thread, use its result
            if next_recording_thread:
                next_recording_thread.join()
                recognized_text = next_recording_queue.get()
                next_recording_thread = None
            else:
                # Only during first iteration or as fallback
                print("Listening..." + (" (Speak now)" if first_interaction else ""))
                first_interaction = False
                # Start audio processing
                recognized_text = speech_recognizer.capture_and_transcribe()
            if not recognized_text:
                print("Could not recognize your speech")
                self.audio_manager.play_file(AUDIO_FILES["try_again"])
                continue
            print("Understood:", recognized_text)
            # Update conversation history
            conversation_history.append({"role": "user", "content": recognized_text})
            # Start the next recording thread PARALLEL to API response
            next_recording_thread = threading.Thread(
                target=self._background_capture,
                args=(speech_recognizer, next_recording_queue)
            )
            next_recording_thread.daemon = True
            next_recording_thread.start()
            # Generate the response
            response = response_generator.generate_streaming_response(recognized_text, conversation_history)
            # Add response to history
            conversation_history.append({"role": "assistant", "content": response})
            # Check if the receiver was hung up in the meantime
            if self.switch.is_pressed:
                break
        # If we get here, the receiver was hung up
        if next_recording_thread and next_recording_thread.is_alive():
            next_recording_thread.join(timeout=0.5)

    def _background_capture(self, recognizer, result_queue):
        try:
            result = recognizer.capture_and_transcribe()
            result_queue.put(result)
        except Exception as e:
            print(f"Error in background recording: {e}")
            result_queue.put(None)

    def _reset_state(self):
        self.pulse_count = 0
        self.audio_manager.stop_continuous_tone()
        self.audio_manager.start_continuous_tone()
        print("Rotary dial ready. Dial a number.")

    def _handle_switch_released(self):
        print("Receiver picked up - System restarting")
        self._restart_script()

    def _handle_switch_pressed(self):
        print("Receiver hung up - System terminating")
        self._cleanup()
        self.running = False
        # Complete termination after short delay
        threading.Timer(1.0, self._restart_script).start()
        return

    def _restart_script(self):
        print("Script restarting...")
        self.audio_manager.stop_continuous_tone()
        os.execv(sys.executable, ['python'] + sys.argv)

    def _cleanup(self):
        # Terminate Audio Manager
        self.audio_manager.stop_continuous_tone()
        # Terminate Speech Recognizer if it exists
        if hasattr(self, 'speech_recognizer') and self.speech_recognizer:
            self.speech_recognizer.cleanup()
        print("Resources have been released.")


def main():
    # Initialize OpenAI client
    client = OpenAI(api_key=OPENAI_API_KEY)
    # Create and start the rotary dialer
    dialer = RotaryDialer(client)
    dialer.start()
    print("Program terminated.")


if __name__ == "__main__":
    main()

Review any connections or configuration tweaks you have made. Every phone model has its quirks, so some experimentation may be needed. Power the Raspberry Pi inside the phone's housing to simplify troubleshooting until you are confident everything works. Once you hear ChatGPT answering through the handset, your rotary hotline is nearly complete.

Enjoy the Retro Experience While Accessing Modern AI

Breathing new life into an old rotary phone with ChatGPT blends nostalgia with innovation. Placing a call to an AI shows off a magical interplay between past and present technology. Over time, you can personalize your script with different voices, languages, or prompts to change how ChatGPT responds.

You could even integrate more AI services to read the news, play podcasts, or manage your schedule. The project gives you hands-on experience with electronics and Python programming as you bridge the gap between an analog telephone and digital AI. When it is finished, you will have a fully working rotary phone that serves as a hotline to ChatGPT. Enjoy your retro-futuristic conversations and keep exploring new ideas for your telephone AI companion.

Reference:

https://www.xda-developers.com/take-chatgpt-retro-raspberry-pi-powered-rotary-phone-hotline/


    <b class='flag-5'>樹莓</b><b class='flag-5'>派</b>GUI應用開發(fā):從零到炫酷的<b class='flag-5'>魔法</b>之旅!