Graduation Design (Thesis) Foreign Reference Material and Translation
Translation topic: Autonomous Intelligent Vehicles
Student name:
Major:
School:
Supervisor:
Professional title:
Note:
Students are required to consult at least one piece of foreign-language material related to their graduation design (thesis) topic and to translate at least ten thousand printed characters (or about three thousand Chinese characters). In principle the translation should be printed (if handwritten, it must be written on 400-character grid manuscript paper) and bound together with the English original and the standard cover provided by the university. It is to be completed within two weeks after the start of the graduation design (thesis) and counts as part of the grade assessment.
Chapter 2 The State-of-the-Art in the USA
2.1 Introduction
The field of intelligent vehicles is growing rapidly all over the world, both in the diversity of its applications and in research [3, 8, 18]. In the U.S. especially, government agencies, universities, and companies are working to develop fully or partially autonomous driving, both for safety and to save energy. Many earlier technologies, such as seat belts and air bags, act only after a traffic accident has occurred; only intelligent vehicles can prevent accidents from happening in the first place. DARPA therefore organized the Grand Challenges and the Urban Challenge from 2004 to 2007, which remarkably advanced intelligent vehicle technology around the world. This chapter presents an overview of the most advanced intelligent vehicle projects in the USA that competed in either the DARPA Grand Challenges or the Urban Challenge.
2.2 Carnegie Mellon University—Boss
The research groups at Carnegie Mellon University developed the Navlab series [8, 17], from Navlab 1 to Navlab 11, which includes robot cars, trucks, and buses. Navlab applications have included Supervised Classification Applied to Road Following (SCARF) [6, 7], Yet Another Road Following (YARF) [12], Autonomous Land Vehicle In a Neural Net (ALVINN) [11], and the Rapidly Adapting Lateral Position Handler (RALPH) system [16]. In addition, Sandstorm is an autonomous vehicle modified from a High Mobility Multipurpose Wheeled Vehicle (HMMWV) that competed in the 2005 DARPA Grand Challenge, and Highlander is another autonomous vehicle, modified from an HMMWV H1, that competed in the same event.
Nevertheless, the latest intelligent vehicle is the Boss system (shown in Fig. 2.1), which won first place in the 2007 Urban Challenge [18]. Boss combines various active and passive sensors to provide faster and safer autonomous driving in an urban environment. The active sensors include lidar and radar, and the passive sensors include a Point Grey high-dynamic-range camera. The following functional modules were implemented on the Boss vehicle:
Fig. 2.1 The intelligent vehicle, named Boss, developed by Carnegie Mellon University’s Red Team (published courtesy of Carnegie Mellon University)
1. Environment perception: Basically, the perception module provides a list of tracked moving objects, static obstacles in a regular grid, vehicle localization relative to the road, road shape estimates, and so on. This module consists of four subsystems: moving obstacle detection and tracking, static obstacle detection and tracking, roadmap localization, and road shape estimation.
2. A three-layer planning system consisting of mission, behavioral, and motion planning is used to drive in urban environments. Mission planning detects obstacles and plans a new route to the goal. Given a Road Network Definition File (RNDF) encoding environment connectivity, a cost graph guides the vehicle to travel on the road/lane planned by the behavioral subsystem. A value function is computed that both provides the path from each waypoint to the target waypoint and allows the navigation system to respond when an error occurs (see the sketch after this list). Furthermore, Boss is capable of planning another route if it encounters a blockage.
The behavioral subsystem is in charge of executing the rules generated by mission planning. In detail, this subsystem makes lane-change, precedence, and safety decisions in different driving contexts such as roads and intersections. It must carry out the rules generated by the mission planner, respond to abnormal conditions, and identify driving contexts: roads, intersections, and zones. These driving contexts correspond to different behavior strategies, consisting of lane driving, intersection handling, and achieving a zone pose. The third layer of the planning system is the motion planning subsystem, which consists of trajectory generation, on-road navigation, and zone navigation. This layer is responsible for executing the current motion goal from the behavioral subsystem. In general, it generates a path towards the target and then tracks that path.
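To make the mission planner's value function concrete, the following Python sketch computes a cost-to-goal value for every waypoint of a toy RNDF-style connectivity graph with Dijkstra's algorithm. It is a minimal, hypothetical illustration, not code from Boss; the graph, costs, and function names are invented for the example.

```python
import heapq

def value_function(graph, goal):
    """Cost-to-goal for every waypoint, via Dijkstra's algorithm run
    outward from the goal. graph[u] maps neighbor -> edge cost; this
    dict-of-dicts stands in for RNDF connectivity. (For a directed
    road network the edges would have to be reversed first; this toy
    graph is symmetric, so the distinction does not show.)"""
    cost = {goal: 0.0}
    frontier = [(0.0, goal)]
    while frontier:
        c, u = heapq.heappop(frontier)
        if c > cost.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            if c + w < cost.get(v, float("inf")):
                cost[v] = c + w
                heapq.heappush(frontier, (c + w, v))
    return cost

# Toy waypoint graph: A..D with traversal costs.
graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 1.5, "D": 5.0},
    "C": {"A": 4.0, "B": 1.5, "D": 1.0},
    "D": {"B": 5.0, "C": 1.0},
}
print(value_function(graph, goal="D"))
# {'D': 0.0, 'C': 1.0, 'B': 2.5, 'A': 3.5}
```

Because every waypoint stores its cost to the goal, the vehicle can recover a best route from any position; raising the cost of a blocked edge and recomputing reproduces the re-routing behavior described above.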
Fig. 2.2 The Stanford University’s intelligent vehicle Junior that was the runner-up in the 2007 DARPA Urban Challenge (published courtesy of Stanford University)
2.3 Stanford University—Junior
Stanford University's intelligent vehicle research team has long been one of the most experienced and successful research labs in the world. To better study and promote applications of autonomous intelligent vehicles, the Volkswagen Group founded the Volkswagen Automotive Innovation Laboratory (VAIL). To date, Stanford University has collaborated with the Volkswagen Group on several intelligent vehicles: Stanley (the autonomous Volkswagen Touareg that won the 2005 DARPA Grand Challenge [10]) and Junior (the autonomous Volkswagen Passat that was the runner-up in the 2007 DARPA Urban Challenge [14]). Moreover, Google has licensed the sensing technology from Stanley to map out 3D digital cities all over the world. Below we introduce Junior, which participated in the 2007 Urban Challenge.
Junior [14], shown in Fig. 2.2, is a modified 2006 Volkswagen Passat wagon equipped with five laser range finders, a GPS/INS, five radars, two Intel quad-core computer systems, and a custom drive-by-wire interface. With this sensor suite, the vehicle can detect obstacles up to 120 m away.
Junior's software architecture is designed as a data-driven pipeline consisting of five modules (a minimal sketch of such a pipeline follows the list):
• Sensor interface: This interface provides data for the other modules.
• Perception modules: These modules segment sensor data into moving vehicles and static obstacles, and also provide an accurate position relative to a digital map of the environment.
• Navigation modules: These modules consist of motion planners and a hierarchical finite state machine, and generate the behavior of the vehicle.
• Drive-by-wire interface: This interface receives control commands from the navigation modules and enables control of the throttle, brakes, steering wheel, gear shifting, turn signals, and emergency brake.
• Global services: These provide logging, time stamping, message-passing support, and watchdog functions to keep the system running reliably.
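As a rough illustration of such a data-driven pipeline, the sketch below chains stand-ins for four of the five modules into a single per-frame tick; global services (logging, watchdogs) are omitted. All names and interfaces here are hypothetical, not Junior's actual software.

```python
from dataclasses import dataclass, field

@dataclass
class Pipeline:
    """Each stage consumes the previous stage's output, mirroring a
    sensor -> perception -> navigation -> drive-by-wire data flow."""
    stages: list = field(default_factory=list)

    def register(self, stage):
        self.stages.append(stage)

    def tick(self, raw_frame):
        data = raw_frame
        for stage in self.stages:
            data = stage(data)
        return data  # the last stage yields actuator commands

pipeline = Pipeline()
pipeline.register(lambda frame: {"scan": frame})               # sensor interface
pipeline.register(lambda d: {**d, "obstacles": []})            # perception
pipeline.register(lambda d: {**d, "maneuver": "keep_lane"})    # navigation
pipeline.register(lambda d: {"steer": 0.0, "throttle": 0.2})   # drive-by-wire
print(pipeline.tick([1.2, 1.3, 1.1]))  # {'steer': 0.0, 'throttle': 0.2}
```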
Three fundamental modules deserve more detail: environment perception, precision localization, and navigation. The perception module has two basic functions, static/dynamic obstacle detection and tracking, and RNDF localization and update; the lasers perform the primary scanning, complemented by a radar system that provides early warning of moving objects at intersections. After perceiving the traffic environment, Junior estimates a local alignment between the digital map in RNDF form and its current position from local sensors. In the navigation module, the first task is to plan global paths, and there are two navigation cases: road navigation and free-style navigation. The basic navigation modules do not, however, cover intersections. Furthermore, Junior uses a behavior hierarchy to keep itself from getting stuck.
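One way to picture the local alignment step is as a one-dimensional correction: each detected lane marking yields an offset from the RNDF centerline, and a robust average of those offsets shifts the map onto the sensed position. The snippet below is purely illustrative, not Junior's algorithm, and assumes the offsets have already been measured.

```python
import statistics

def local_alignment(observed_offsets):
    """Estimate a lateral map correction (metres) from per-detection
    offsets between sensed lane markings and the RNDF centerline.
    The median tolerates occasional spurious detections."""
    return statistics.median(observed_offsets)

offsets = [0.42, 0.38, 0.45, 1.90, 0.40]  # one outlier detection
print(local_alignment(offsets))           # 0.42
```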
Nowadays, researchers at Stanford University are still working on autonomous parking in tight parking spots and on autonomous valet parking.
2.4 Virginia Polytechnic Institute and State University—Odin
Team VictorTango, formed by Virginia Tech and TORC Technologies, developed Odin [2], which took third place in the 2007 DARPA Urban Challenge. Odin consists of three main parts: the base vehicle body, perception, and planning.
Now we introduce the base vehicle platform. Odin is a modified 2005 Ford Escape Hybrid, shown in Fig. 2.3. Its main computing platform is a pair of HP servers, each with two quad-core processors.
In the perception module there are three submodules: object classification, localization, and road detection. Object classification first detects obstacles and then classifies them as either static or dynamic. The localization submodule yields the vehicle's position and orientation in the 3D world, and the road detection submodule extracts a road coverage map and the lane position.
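The detect-then-classify step can be reduced to a simple rule of thumb: an obstacle whose tracked speed exceeds a small threshold is dynamic, otherwise static. The sketch below is a hypothetical simplification of that idea, not VictorTango's implementation.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float      # position in the vehicle frame, metres
    y: float
    speed: float  # speed estimated by the tracker, m/s

def classify(obj: TrackedObject, threshold: float = 0.5) -> str:
    """Label a detection as static or dynamic; the threshold absorbs
    tracker noise so parked cars are not flagged as moving."""
    return "dynamic" if obj.speed > threshold else "static"

print(classify(TrackedObject(x=12.0, y=-1.5, speed=3.2)))  # dynamic
print(classify(TrackedObject(x=8.0, y=2.0, speed=0.1)))    # static
```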
The planning module uses a hybrid deliberative-reactive model, which separates upper-level decisions and lower-level reactions into distinct components. The coarsest level of planning is the route planner, responsible for choosing the road segments and zones the vehicle should travel through. The driving behavior component takes care of obeying the road rules, and motion planning is in charge of translating motion commands into actuator control signals.
Fig. 2.3 The intelligent vehicle Odin developed by the Team VictorTango (published courtesy of Virginia Polytechnic Institute and State University)
2.5 Massachusetts Institute of Technology—Talos
Team MIT developed an urban autonomous vehicle called Talos (shown in Fig. 2.4) [1, 9, 13]. It has three key novel features: (i) a perception-based navigation strategy; (ii) a unified planning and control architecture; and (iii) a powerful new software infrastructure. The vehicle comprises various submodules: Road Paint Detector, Navigator, Lane Tracker, Driveability Map, Obstacle Detector, Motion Planner, Fast Vehicle Detector, Controller, and Positioning Modules. The perception module includes the obstacle detector, hazard detector, and lane tracking submodules. Planning and control involve the navigator, driveability map, motion planner, and controller. The navigator handles mission-level behavior, and the remaining submodules work together in tight coupling to achieve the desired motion control goal in complex driving conditions.
Fig. 2.4 The intelligent vehicle Talos developed by the Team MIT (published courtesy of Massachusetts Institute of Technology)
Fig. 2.5 The intelligent vehicle Skynet developed by Team Cornell (published courtesy of Cornell University)
2.6 Cornell University—Skynet
Team Cornell's Skynet is a modified Chevrolet Tahoe, shown in Fig. 2.5, and carries two groups of sensors [15]. One group senses the vehicle itself, and the other (laser, radar, and vision) senses the environment. With these sensors, Skynet provides real-time position, velocity, and attitude for absolute positioning. Moreover, Skynet's local map, which includes obstacle detection information, models the environment immediately surrounding the vehicle. In many cases, autonomous driving in complex scenes requires more than basic obstacle avoidance, so a vehicle-centric local map alone is not enough; environment structure must also be estimated using posterior pose and track generator algorithms.
Skynet uses this probabilistic representation of the environment to plan mission paths within the context of the rule-based road network. Its intelligent planner includes three primary layers: a behavioral layer, a tactical layer, and an operational layer. The goal of the behavioral layer is to determine the fastest route to the next mission point. When a state transition occurs in the behavioral layer, the corresponding component of the tactical layer is executed. Among the four tactical components, the road tactical component seeks a proper lane and monitors other agents in the same and neighboring lanes; the intersection tactical component handles intersection queuing behavior and safe merging; the zone tactical component takes care of basic navigation in unconstrained cases; and the blockage tactical component detects obstructions, judges whether they are temporary traffic jams, and acts accordingly. The final layer is the operational layer, which converts local driving boundaries and a reference speed into actuator commands for the steering wheel, throttle, and brakes.
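The behavioral-to-tactical hand-off can be sketched as a small state machine that picks one of the four tactical components from the current driving context. The state names mirror the text; the transition logic and context fields are invented for illustration.

```python
# Hypothetical dispatch from behavioral state to tactical component.
TACTICAL = {
    "road":         lambda ctx: "track lane, monitor neighboring agents",
    "intersection": lambda ctx: "queue, then merge when safe",
    "zone":         lambda ctx: "navigate unconstrained area",
    "blockage":     lambda ctx: "confirm jam, then replan route",
}

def behavioral_step(ctx):
    """Choose the tactical component for the current context."""
    if ctx.get("blocked"):
        state = "blockage"
    elif ctx.get("in_zone"):
        state = "zone"
    elif ctx.get("near_intersection"):
        state = "intersection"
    else:
        state = "road"
    return state, TACTICAL[state](ctx)

print(behavioral_step({"near_intersection": True}))
# ('intersection', 'queue, then merge when safe')
```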
2.7 University of Pennsylvania and Lehigh University—Little Ben
Little Ben, designed by the Ben Franklin Racing Team for the 2007 DARPA Urban Challenge, is a Toyota Prius modified with various sensors and computers [4], shown in Fig. 2.6. Like other intelligent vehicles, Little Ben is equipped with multiple sensors, such as three LMS291s, two SICK LDRS, and a Bumblebee stereo camera. This sensor array provides timely information about the surrounding environment, which is integrated into a dynamic map for environment perception and modeling.
Little Ben's software framework consists of perception, planning, and control. The perception module is responsible for reporting static obstacles, moving vehicles, lane markings, and traversable ground. Little Ben's primary medium-to-long-range lidars handle geometric obstacle and ground classification, road marking extraction, and dynamic obstacle tracking, while the stereo vision system detects nearby road markings. Once the perception module generates information about static obstacles, dynamic obstacles, and lane markings, the MapPlan module updates the obstacle and lane-marking likelihoods in a map centered at the current vehicle location (a sketch of such an update follows). Mission and path planning consists of two stages: the first computes the optimal path by minimizing mission time, and the second incorporates the dynamic map into new path planning. Afterwards, the path follower module computes the steering and throttle/brake commands needed to follow the desired trajectory.
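A common way to maintain such likelihoods, and a plausible reading of the MapPlan update, is a log-odds occupancy grid centred on the vehicle: repeated detections strengthen a cell and misses decay it. The snippet below is an assumed sketch, not Little Ben's code; the grid size, resolution, and weights are invented.

```python
import numpy as np

GRID = 200    # cells per side
RES = 0.5     # metres per cell
log_odds = np.zeros((GRID, GRID))  # log-odds 0 == probability 0.5

def update_cell(x, y, hit, l_hit=0.85, l_miss=-0.4):
    """Fuse one lidar observation at vehicle-frame (x, y) metres."""
    i = int(GRID / 2 + y / RES)
    j = int(GRID / 2 + x / RES)
    if 0 <= i < GRID and 0 <= j < GRID:
        log_odds[i, j] += l_hit if hit else l_miss

update_cell(10.0, -2.0, hit=True)
prob = 1.0 - 1.0 / (1.0 + np.exp(log_odds))  # back to probabilities
print(round(prob[96, 120], 2))  # 0.7 at the observed cell
```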
Fig. 2.6 The intelligent vehicle Little Ben (published courtesy of the University of Pennsylvania and Lehigh University)