Salva tu Playa

Los Llanos beach sits beside the Palmetum and the Parque Marítimo in Santa Cruz de Tenerife, the capital. For cleaner, better-kept beaches, free of noise and nuisance; for the recovery of beaches, bathing access, and full enjoyment of the sea. Online signature drive: support and defend your beach. A beach for everyone.

Wednesday, December 18, 2019

The race to develop renewable energy technologies

In the early 20th century, just as electric grids were starting to transform daily life, an unlikely advocate for renewable energy voiced his concerns about burning fossil fuels. Thomas Edison expressed dismay over using combustion instead of renewable resources in a 1910 interview for Elbert Hubbard’s anthology, “Little Journeys to the Homes of the Great.”

“This scheme of combustion to get power makes me sick to think of — it is so wasteful,” Edison said. “You see, we should utilize natural forces and thus get all of our power. Sunshine is a form of energy, and the winds and the tides are manifestations of energy. Do we use them? Oh, no! We burn up wood and coal, as renters burn up the front fence for fuel.”

Over a century later, roughly 80 percent of global energy consumption still comes from burning fossil fuels. As the impact of climate change on the environment becomes increasingly drastic, there is a mounting sense of urgency for researchers and engineers to develop scalable renewable energy solutions.

“Even 100 years ago, Edison understood that we cannot replace combustion with a single alternative,” adds Reshma Rao PhD '19, a postdoc in MIT’s Electrochemical Energy Lab who included Edison’s quote in her doctoral thesis. “We must look to different solutions that might vary temporally and geographically depending on resource availability.”

Rao is one of many researchers across MIT’s Department of Mechanical Engineering who have entered the race to develop energy conversion and storage technologies from renewable sources such as wind, wave, solar, and thermal.

Harnessing energy from waves

When it comes to renewable energy, waves have other resources beat in two respects. First, unlike solar, waves offer a consistent energy source regardless of time of day. Second, waves provide much greater energy density than wind due to water’s heavier mass.

Despite these advantages, wave-energy harvesting is still in its infancy. Unlike wind and solar, there is no consensus in the field of wave hydrodynamics on how to efficiently capture and convert wave energy. Dick K.P. Yue, Philip J. Solondz Professor of Engineering, is hoping to change that.

“My group has been looking at new paradigms,” explains Yue. “Rather than tinkering with small improvements, we want to develop a new way of thinking about the wave-energy problem.”

One aspect of that paradigm is determining the optimal geometry of wave-energy converters (WECs). Graduate student Emma Edwards has been developing a systematic methodology to determine what kind of shape WECs should be.

“If we can optimize the shape of WECs for maximizing extractable power, wave energy could move significantly closer to becoming an economically viable source of renewable energy,” says Edwards. 

Another aspect of the wave-energy paradigm Yue’s team is working on is finding the optimal configuration for WECs in the water. Grgur Tokić PhD '16, an MIT alum and current postdoc working in Yue’s group, is building a case for optimal configurations of WECs in large arrays, rather than as stand-alone devices.

Before being placed in the water, WECs are tuned for their particular environment. This tuning involves considerations like predicted wave frequency and prevailing wind direction. According to Tokić and Yue, if WECs are configured in an array, this tuning could occur in real time, maximizing energy-harvesting potential.

In an array, “sentry” WECs could gather measurements about the waves, such as amplitude, frequency, and direction. Using wave reconstruction and forecasting, these WECs could then wirelessly communicate information about conditions to other WECs in the array, enabling them to retune minute by minute in response to current wave conditions.

“If an array of WECs can tune fast enough so they are optimally configured for their current environment, now we are talking serious business,” explains Yue. “Moving toward arrays opens up the possibility of significant advances, with gains many times over those of non-interacting, isolated devices.”
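
In concept, the real-time retuning loop could look something like the sketch below. All names are hypothetical and the power model is a stand-in; the article does not describe the group's actual control scheme. A sentry's short-term forecast is broadcast to the array, and each device picks whichever control setting is predicted to capture the most power.

```python
# A minimal sketch of array-wide retuning driven by a "sentry" forecast.
# All names are hypothetical; predicted_power stands in for a hydrodynamic
# model or a precomputed response table for each device.

def tune_array(forecast, devices, settings, predicted_power):
    """forecast: e.g. {'amplitude_m': 1.2, 'frequency_hz': 0.1, 'direction_deg': 270}
    devices: list of WEC identifiers in the array
    settings: candidate control settings (e.g. power-take-off damping values)
    predicted_power: callable (device, setting, forecast) -> expected power in watts
    Returns the setting each device should switch to for the coming interval."""
    return {
        device: max(settings, key=lambda s: predicted_power(device, s, forecast))
        for device in devices
    }
```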

By examining the optimal size and configuration of WECs using theoretical and computational methods, Yue’s group hopes to develop potentially game-changing frameworks for harnessing the power of waves.

Accelerating the discovery of photovoltaics

The amount of solar energy that reaches the Earth’s surface offers a tantalizing prospect in the quest for renewable energy. Every hour, an estimated 430 quintillion joules of energy is delivered to Earth from the sun. That’s the equivalent of one year’s worth of global energy consumption by humans.

Tonio Buonassisi, professor of mechanical engineering, has dedicated his entire career to developing technologies that harness this energy and convert it into usable electricity. But time, he says, is of the essence. “When you consider what we are up against in terms of climate change, it becomes increasingly clear we are running out of time,” he says.

For solar energy to have a meaningful impact, according to Buonassisi, researchers need to develop solar cell materials that are efficient, scalable, cost-effective, and reliable. These four variables pose a challenge for engineers — rather than develop a material that satisfies just one of these factors, they need to create one that ticks off all four boxes and can be moved to market as quickly as possible. “If it takes us 75 years to get a solar cell that does all of these things to market, it’s not going to help us solve this problem. We need to get it to market in the next five years,” Buonassisi adds.

To accelerate the discovery and testing of new materials, Buonassisi’s team has developed a process that uses a combination of machine learning and high-throughput experimentation — a type of experimentation that enables a large quantity of materials to be screened at the same time. The result is a 10-fold increase in the speed of discovery and analysis for new solar cell materials.

“Machine learning is our navigational tool,” explains Buonassisi. “It can de-bottleneck the cycle of learning so we can grind through material candidates and find one that satisfies all four variables.”

Shijing Sun, a research scientist in Buonassisi’s group, used a combination of machine learning and high-throughput experiments to quickly assess and test perovskite solar cells.

“We use machine learning to accelerate the materials discovery, and developed an algorithm that directs us to the next sampling point and guides our next experiment,” Sun says. Previously, it would take three to five hours to classify a set of solar cell materials. The machine learning algorithm can classify materials in just five minutes.
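
A stripped-down version of such a loop might look like the sketch below. The function and variable names are assumptions, and the simple mean-plus-spread ranking stands in for the group's actual algorithm: a model is fit to the compositions measured so far, and the next composition to synthesize is the one that looks both promising and uncertain.

```python
# A minimal sketch of a machine-learning-guided experiment loop (hypothetical names).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def next_composition(tested_compositions, measured_quality, candidate_compositions):
    """Fit a model to results so far and propose the next composition to synthesize."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(np.asarray(tested_compositions), np.asarray(measured_quality))
    # spread across the forest's trees serves as a crude uncertainty estimate
    per_tree = np.stack([tree.predict(np.asarray(candidate_compositions))
                         for tree in model.estimators_])
    score = per_tree.mean(axis=0) + per_tree.std(axis=0)  # promising + uncertain
    return candidate_compositions[int(np.argmax(score))]
```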

Using this method, Sun and Buonassisi made and tested 96 compositions. Of those, two perovskite materials hold promise and will be tested further.

By using machine learning as a tool for inverse design, the research team hopes to assess thousands of compounds that could lead to the development of a material that enables the large-scale adoption of solar energy conversion. “If in the next five years we can develop that material using the set of productivity tools we’ve developed, it can help us secure the best possible future that we can,” adds Buonassisi.

New materials to trap heat

While Buonassisi’s team is focused on developing solutions that directly convert solar energy into electricity, researchers including Gang Chen, Carl Richard Soderberg Professor of Power Engineering, are working on technologies that convert sunlight into heat. Thermal energy from the heat is then used to provide electricity.

“For the past 20 years, I’ve been working on materials that convert heat into electricity,” says Chen. While much of this materials research is on the nanoscale, Chen and his team at the NanoEngineering Group are no strangers to large-scale experimental systems. They previously built a to-scale receiver system that used concentrating solar thermal power (CSP).

In CSP, sunlight is used to heat up a thermal fluid, such as oil or molten salt. That fluid is then either used to generate electricity by running an engine, such as a steam turbine, or stored for later use.

Over the course of a four-year project funded by the U.S. Department of Energy, Chen’s team built a CSP receiver at MIT’s Bates Research and Engineering Center in Middleton, Massachusetts. They developed the Solar Thermal Aerogel Receiver — nicknamed STAR.

The system relied on mirrors known as Fresnel reflectors to direct sunlight to pipes containing thermal fluid. Typically, for fluid to effectively trap the heat generated by this reflected sunlight, it would need to be encased in a high-cost vacuum tube. In STAR, however, Chen’s team utilized a transparent aerogel that can trap heat at incredibly high temperatures — removing the need for expensive vacuum enclosures. While letting in over 95 percent of the incoming sunlight, the aerogel retains its insulating properties, preventing heat from escaping the receiver.

In addition to being more efficient than traditional vacuum receivers, the aerogel receivers enabled new configurations for the CSP solar reflectors. The reflecting mirrors were flatter and more compact than the parabolic mirrors conventionally used, resulting in material savings. 

“Cost is everything with energy applications, so the fact STAR was cheaper than most thermal energy receivers, in addition to being more efficient, was important,” adds Svetlana Boriskina, a research scientist working on Chen’s team. 

After the conclusion of the project in 2018, Chen and Evelyn Wang, professor and head of MIT’s Department of Mechanical Engineering, have continued their collaboration to explore solar thermal applications for the aerogel material used in STAR. They recently used the aerogel in a device containing a heat-absorbing material. When placed on a roof on MIT’s campus, the heat-absorbing material, covered by a layer of the aerogel, reached a remarkably high temperature of 220 degrees Celsius, while the outside air was a chilly 0 degrees Celsius. Unlike STAR, this new system doesn’t require Fresnel reflectors to direct sunlight to the thermal material.

“Our latest work using the aerogel enables sunlight concentration without focusing optics to harness thermal energy,” explains Chen. “If you aren’t using focusing optics, you can develop a system that is easier to use and cheaper than traditional receivers.”

The aerogel device could potentially be further developed into a system that powers heating and cooling systems in homes.

Solving the storage problem

While CSP receivers like STAR offer some energy storage capabilities, there is a push to develop more robust energy storage systems for renewable technologies. Storing energy for later use when resources aren’t supplying a consistent stream of energy — for example, when the sun is covered by clouds, or there is little-to-no wind — will be crucial for the adoption of renewable energy on the grid. To solve this problem, researchers are developing new storage technologies.  

Asegun Henry, Robert N. Noyce Career Development Professor, who like Chen has developed CSP technologies, has created a new storage system that has been dubbed “sun in a box.” Using two tanks, excess energy can be stored in white-hot molten silicon. When this excess energy is needed, mounted photovoltaic cells can be actuated into place to convert the white-hot light from the silicon back into electricity.

“It’s a true battery that can work with any type of energy conversion,” adds Henry.

Betar Gallant, ABS Career Development Professor, meanwhile, is exploring ways to improve the energy density of today’s electrochemical batteries by designing new storage materials that are more cost-effective and versatile for storing cleanly generated energy. Rather than develop these materials using metals that are extracted through energy-intensive mining, she aims to build batteries using more earth-abundant materials.

“Ideally, we want to create a battery that can match the irregular supply of solar or wind energy, which peak at different times, without degrading as today’s batteries do,” explains Gallant.

In addition to working on lithium-ion batteries, like Gallant, Yang Shao-Horn, W.M. Keck Professor of Energy, and postdoc Reshma Rao are developing technologies that can directly convert renewable energy to fuels.

“If we want to store energy at scale going beyond lithium ion batteries, we need to use resources that are abundant,” Rao explains. In their electrochemical technology, Rao and Shao-Horn utilize one of the most abundant resources — liquid water.

Using an active catalyst and electrodes, water is split into hydrogen and oxygen in a series of chemical reactions. The hydrogen becomes an energy carrier and can be stored for later use in a fuel cell. To convert the energy stored in the hydrogen back into electricity, the reactions are reversed. The only by-product of this reaction is water.  
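
For reference, the textbook water-splitting reactions underlying this scheme (standard acid-electrolyte electrochemistry, not specific to the MIT work) are:

```latex
\begin{align*}
\text{Anode (oxygen evolution):}\quad & 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode (hydrogen evolution):}\quad & 4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2} \\
\text{Overall:}\quad & 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```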

“If we can get and store hydrogen sustainably, we can basically electrify our economy using renewables like wind, wave, or solar,” says Rao.

Rao has broken down every fundamental reaction that takes place within this process. In addition to focusing on the electrode-electrolyte interface involved, she is developing next-generation catalysts to drive these reactions.  

“This work is at the frontier of the fundamental understanding of active sites catalyzing water splitting for hydrogen-based fuels from solar and wind to decarbonize transport and industry,” adds Shao-Horn.

Securing a sustainable future

While shifting from a grid powered primarily by fossil fuels to one powered by renewable energy seems like a herculean task, there have been promising developments in the past decade. A report released prior to the UN Global Climate Action Summit in September showed that, thanks to $2.6 trillion of investment, global renewable energy capacity has quadrupled since 2010.

In a statement after the release of the report, Inger Andersen, executive director of the UN Environment Program, stressed the correlation between investing in renewable energy and securing a sustainable future for humankind. “It is clear that we need to rapidly step up the pace of the global switch to renewables if we are to meet international climate and development goals,” Andersen said.

No single conversion or storage technology will be responsible for the shift from fossil fuels to renewable energy. It will require a tapestry of complementary solutions from researchers both here at MIT and across the globe.



from MIT News - Oceanography and ocean engineering https://ift.tt/35ANWMm

Monday, December 9, 2019

Intelligent Towing Tank propels human-robot-computer research

In its first year of operation, the Intelligent Towing Tank (ITT) conducted about 100,000 total experiments, essentially completing the equivalent of a PhD student’s five years’ worth of experiments in a matter of weeks.

The automated experimental facility, developed in the MIT Sea Grant Hydrodynamics Laboratory, adaptively performs, analyzes, and designs experiments exploring vortex-induced vibrations (VIVs). Important for engineering offshore structures like the marine drilling risers that connect underwater oil wells to the surface, VIVs remain difficult for researchers to characterize because of the large number of parameters involved.

Guided by active learning, the ITT conducts series of experiments wherein the parameters of each next experiment are selected by a computer. Using an “explore-and-exploit” methodology, the system dramatically reduces the number of experiments required to explore and map the complex forces governing VIVs.

What began as then-PhD candidate Dixia Fan’s quest to cut back on conducting a thousand or so laborious experiments — by hand — led to the design of the innovative system and a paper recently published in the journal Science Robotics.

Fan, now a postdoc, and a team of researchers from the MIT Sea Grant College Program and MIT’s Department of Mechanical Engineering, École Normale Supérieure de Rennes, and Brown University, reveal a potential paradigm shift in experimental research, where humans, computers, and robots can collaborate more effectively to accelerate scientific discovery.

The 33-foot whale of a tank comes alive, working without interruption or supervision on the venture at hand — in this case, exploring a canonical problem in the field of fluid-structure interactions. But the researchers envision applications of the active learning and automation approach to experimental research across disciplines, potentially leading to new insights and models in multi-input/multi-output nonlinear systems.

VIVs are inherently nonlinear motions induced on a structure by an oncoming irregular cross-stream, and they prove vexing to study. The researchers report that the number of experiments completed by the ITT is already comparable to the total number of experiments done to date worldwide on the subject of VIVs.

The reason for this is the large number of independent parameters, from flow velocity to pressure, involved in studying the complex forces at play. According to Fan, a systematic brute-force approach — blindly conducting 10 measurements per parameter in an eight-dimensional parametric space — would require 100 million experiments.

With the ITT, Fan and his collaborators have taken the problem into a wider parametric space than previously practicable to explore. “If we performed traditional techniques on the problem we studied,” he explains, “it would take 950 years to finish the experiment.” Clearly infeasible, so Fan and the team integrated a Gaussian process regression learning algorithm into the ITT. In doing so, the researchers reduced the experimental burden by several orders of magnitude, requiring only a few thousand experiments.

The robotic system automatically conducts an initial sequence of experiments, periodically towing a submerged structure along the length of the tank at a constant velocity. Then, the ITT takes partial control over the parameters of each next experiment by minimizing suitable acquisition functions of quantified uncertainties and adapting to achieve a range of objectives, like reduced drag.
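
A bare-bones version of that selection step might look like the sketch below. The function names and the use of scikit-learn are assumptions, and the ITT's actual acquisition functions are more elaborate: refit a Gaussian process after each tow, then run the next experiment where the model's uncertainty is largest.

```python
# A minimal sketch of uncertainty-driven experiment selection (hypothetical names).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def choose_next_tow(params_done, forces_measured, params_candidates):
    """params_done: parameters of completed tows; forces_measured: their results.
    Returns the candidate parameter set where predictive uncertainty is highest."""
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(np.asarray(params_done), np.asarray(forces_measured))
    _, std = gp.predict(np.asarray(params_candidates), return_std=True)
    return params_candidates[int(np.argmax(std))]  # tow where the model knows least
```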

Earlier this year, Fan was awarded an MIT Mechanical Engineering de Florez Award for "Outstanding Ingenuity and Creative Judgment" in the development of the ITT. “Dixia’s design of the Intelligent Towing Tank is an outstanding example of using novel methods to reinvigorate mature fields,” says Michael Triantafyllou, Henry L. and Grace Doherty Professor in Ocean Science and Engineering, who acted as Fan’s doctoral advisor.

Triantafyllou, a co-author on this paper and the director of the MIT Sea Grant College Program, says, “MIT Sea Grant has committed resources and funded projects using deep-learning methods in ocean-related problems for several years that are already paying off.” Funded by the National Oceanic and Atmospheric Administration and administered by the National Sea Grant Program, MIT Sea Grant is a federal-Institute partnership that brings the research and engineering core of MIT to bear on ocean-related challenges.

Fan’s research points to a number of others utilizing automation and artificial intelligence in science: At Caltech, a robot scientist named “Adam” generates and tests hypotheses; at the Defense Advanced Research Projects Agency, the Big Mechanism program reads tens of thousands of research papers to generate new models.

Similarly, the ITT applies human-computer-robot collaboration to accelerate experimental efforts. The system demonstrates a potential paradigm shift in conducting research, where automation and uncertainty quantification can considerably accelerate scientific discovery. The researchers assert that the machine learning methodology described in this paper can be adapted and applied in and beyond fluid mechanics, to other experimental fields.

Other contributors to the paper include George Karniadakis from Brown University, who is also affiliated with MIT Sea Grant; Gurvan Jodin from ENS Rennes; MIT PhD candidate in mechanical engineering Yu Ma; and Thomas Consi, Luca Bonfiglio, and Lily Keyes from MIT Sea Grant.

This work was supported by DARPA, Fariba Fahroo, and Jan Vandenbrande through an EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) grant, as well as Shell, Subsea 7, and the MIT Sea Grant College Program.



from MIT News - Oceanography and ocean engineering https://ift.tt/2E2dzcU

Friday, December 6, 2019

Understanding the impact of deep-sea mining

Resting atop Thomas Peacock’s desk is an ordinary-looking brown rock. Roughly the size of a potato, it has been at the center of decades of debate. Known as a polymetallic nodule, it spent 10 million years sitting on the deep seabed, 15,000 feet below sea level. The nodule contains nickel, cobalt, copper, and manganese — four minerals that are essential in energy storage.

“As society moves toward driving more electric vehicles and utilizing renewable energy, there will be an increased demand for these minerals, to manufacture the batteries necessary to decarbonize the economy,” says Peacock, a professor of mechanical engineering and the director of MIT’s Environmental Dynamics Lab (END Lab). He is part of an international team of researchers that has been trying to gain a better understanding of the environmental impact of collecting polymetallic nodules, a process known as deep-sea mining.

The minerals found in the nodules, particularly cobalt and nickel, are key components of lithium-ion batteries. Currently, lithium-ion batteries offer the best energy density of any commercially available battery. This high energy density makes them ideal for use in everything from cellphones to electric vehicles, which require large amounts of energy within a compact space.

“Those two elements are expected to see a tremendous growth in demand due to energy storage,” says Richard Roth, director of MIT’s Materials Systems Laboratory.

While researchers are exploring alternative battery technologies such as sodium-ion batteries and flow batteries that utilize electrochemical cells, these technologies are far from commercialization.

“Few people expect any of these lithium-ion alternatives to be available in the next decade,” explains Roth. “Waiting for unknown future battery chemistries and technologies could significantly delay widespread adoption of electric vehicles.”

Vast amounts of specialty nickel will also be needed to build the larger-scale batteries required as societies look to shift from an electric grid powered by fossil fuels to one powered by renewable resources like solar, wind, wave, and thermal.

“The collection of nodules from the seabed is being considered as a new means for getting these materials, but before doing so it is imperative to fully understand the environmental impact of mining resources from the deep ocean and compare it to the environmental impact of mining resources on land,” explains Peacock.

After receiving seed funding from MIT’s Environmental Solutions Initiative (ESI), Peacock was able to apply his expertise in fluid dynamics to study how deep-sea mining could affect surrounding ecosystems.

Meeting the demand for energy storage

Currently, nickel and cobalt are extracted through land-based mining operations. Much of this mining occurs in the Democratic Republic of the Congo, which produces 60 percent of the world’s cobalt. These land-based mines often impact surrounding environments through the destruction of habitats, erosion, and soil and water contamination. There are also concerns that land-based mining, especially in politically unstable countries, might not be able to supply enough of these materials as the demand for batteries rises.

The swath of ocean located between Hawaii and the West Coast of the United States — also known as the Clarion Clipperton Fracture Zone — is estimated to possess six times more cobalt and three times more nickel than all known land-based stores, as well as vast deposits of manganese and a substantial amount of copper.

While the seabed is abundant with these materials, little is known about the short- and long-term environmental effects of mining 15,000 feet below sea level. Peacock and his collaborator Professor Matthew Alford from the Scripps Institution of Oceanography and the University of California at San Diego are leading the quest to understand how the sediment plumes generated by the collection of nodules from the seabed will be carried by water currents.

“The key question is, if we decide to make a plume at site A, how far does it spread before eventually raining down on the sea floor?” explains Alford. “That ability to map the geography of the impact of sea floor mining is a crucial unknown right now.”

The research Peacock and Alford are conducting will help inform stakeholders about the potential environmental effects of deep-sea mining. One pressing matter is that draft exploitation regulations for deep-sea mining in areas beyond national jurisdiction are currently being negotiated by the International Seabed Authority (ISA), an independent organization established by the United Nations that regulates all mining activities on the sea floor. Peacock and Alford’s research will help guide the development of environmental standards and guidelines to be issued under those regulations.

“We have a unique opportunity to help regulators and other concerned parties to assess draft regulations using our data and modeling, before operations start and we regret the impact of our activity,” says Carlos Munoz Royo, a PhD student in MIT’s END Lab.

Tracking plumes in the water

In deep-sea mining, a collector vehicle would be deployed from a ship. The collector vehicle then travels 15,000 feet down to the seabed, where it vacuums up the top four inches of the seabed. This process creates a plume known as a collector plume.

“As the collector moves across the seabed floor, it stirs up sediment and creates a sediment cloud, or plume, that’s carried away and distributed by ocean currents,” explains Peacock.

The collector vehicle picks up the nodules, which are pumped through a pipe back to the ship. On the ship, usable nodules are separated from unwanted sediment. That sediment is piped back into the ocean, creating a second plume, known as a discharge plume.

Peacock collaborated with Pierre Lermusiaux, professor of mechanical engineering and of ocean science and engineering, and Glenn Flierl, professor of Earth, atmospheric, and planetary sciences, to create mathematical models that predict how these two plumes travel through the water.
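
At their core, such models track a sediment concentration field that is carried by currents, spread by turbulent mixing, and slowly settles out. A generic balance of that kind, written here as a simplified illustration rather than the authors' specific formulation, is:

```latex
\frac{\partial C}{\partial t} + \mathbf{u}\cdot\nabla C
  \;=\; \nabla\cdot\left(K\,\nabla C\right) + w_s\,\frac{\partial C}{\partial z}
```

where C is the sediment concentration, u the ambient current, K a turbulent diffusivity, w_s the particle settling speed, and z is measured upward.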

To test these models, Peacock set out to track actual plumes created by mining the floor of the Pacific Ocean. With funding from MIT ESI, he embarked on the first-ever field study of such plumes. He was joined by Alford and Eric Adams, senior research engineer at MIT, as well as other researchers and engineers from MIT, Scripps, and the United States Geological Survey.

With funding from the UC Ship Funds Program, the team conducted experiments in consultation with the ISA during a weeklong expedition in the Pacific Ocean aboard the U.S. Navy R/V Sally Ride in March 2018. The researchers mixed sediment with a tracer dye that they were able to track using sensors on the ship developed by Alford’s Multiscale Ocean Dynamics group. In doing so, they created a map of the plumes’ journeys.

The field experiments demonstrated that the models Peacock and Lermusiaux developed can be used to predict how plumes will travel through the water — and could help give a clearer picture of how surrounding biology might be affected.

Impact on deep-sea organisms

Life on the ocean floor moves at a glacial pace. Sediment accumulates at a rate of 1 millimeter every millennium. With such a slow rate of growth, areas disturbed by deep-sea mining would be unlikely to recover on a reasonable timescale.


“The concern is that if there is a biological community specific to the area, it might be irretrievably impacted by mining,” explains Peacock. 

According to Cindy Van Dover, professor of biological oceanography at Duke University, in addition to organisms that live in or around the nodules, other organisms elsewhere in the water column could be affected as the plumes travel.

“There could be clogging of filter feeding structures of, for example, gelatinous organisms in the water column, and burial of organisms on the sediment,” she explains. “There could also be some metals that get into the water column, so there are concerns about toxicology.”

Peacock’s research on plumes could help biologists like Van Dover assess collateral damage from deep-sea mining operations in surrounding ecosystems.

Drafting regulations for mining the sea

Through connections with MIT’s Policy Lab, the Institute is one of only two research universities with observer status at the ISA.

“The plume research is very important, and MIT is helping with the experimentation and developing plume models, which is vital to inform the current work of the International Seabed Authority and its stakeholder base,” explains Chris Brown, a consultant at the ISA. Brown was one of dozens of experts who convened on MIT’s campus last fall at a workshop discussing the risks of deep-sea mining.

To date, the field research Peacock and Alford conducted is the only ocean dataset on midwater plumes that exists to help guide decision-making. The next step in understanding how plumes move through the water will be to track plumes generated by a prototype collector vehicle. Peacock and his team in the END Lab are preparing to participate in a major field study using a prototype vehicle in 2020.

Thanks to recent funding provided by the 11th Hour Project, Peacock and Lermusiaux hope to develop models that give increasingly accurate predictions about how deep-sea mining plumes will travel through the ocean. They will continue to interact with academic colleagues, international agencies, NGOs, and contractors to develop a clearer picture of deep-sea mining’s environmental impact.

“It’s important to have input from all stakeholders early in the conversation to help make informed decisions, so we can fully understand the environmental impact of mining resources from the ocean and compare it to the environmental impact of mining resources on land,” says Peacock.



from MIT News - Oceanography and ocean engineering https://ift.tt/2LHGoj5

Monday, November 4, 2019

Autonomous system improves environmental sampling at sea

An autonomous robotic system invented by researchers at MIT and the Woods Hole Oceanographic Institution (WHOI) efficiently sniffs out the most scientifically interesting — but hard-to-find — sampling spots in vast, unexplored waters.

Environmental scientists are often interested in gathering samples at the most interesting locations, or “maxima,” in an environment. One example could be a source of leaking chemicals, where the concentration is the highest and mostly unspoiled by external factors. But a maximum can be any quantifiable value that researchers want to measure, such as water depth or parts of coral reef most exposed to air.

Efforts to deploy maximum-seeking robots suffer from efficiency and accuracy issues. Commonly, robots will move back and forth like lawnmowers to cover an area, which is time-consuming and collects many uninteresting samples. Some robots sense and follow high-concentration trails to their leak source. But they can be misled. For example, chemicals can get trapped and accumulate in crevices far from a source. Robots may identify those high-concentration spots as the source yet be nowhere close.

In a paper being presented at the International Conference on Intelligent Robots and Systems (IROS), the researchers describe “PLUMES,” a system that enables autonomous mobile robots to zero in on a maximum far faster and more efficiently. PLUMES leverages probabilistic techniques to predict which paths are likely to lead to the maximum, while navigating obstacles, shifting currents, and other variables. As it collects samples, it weighs what it’s learned to determine whether to continue down a promising path or search the unknown — which may harbor more valuable samples.

Importantly, PLUMES reaches its destination without ever getting trapped in those tricky high-concentration spots. “That’s important, because it’s easy to think you’ve found gold, but really you’ve found fool’s gold,” says co-first author Victoria Preston, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and in the MIT-WHOI Joint Program.

The researchers built a PLUMES-powered robotic boat that successfully detected the most exposed coral head in the Bellairs Fringing Reef in Barbados — meaning, it was located in the shallowest spot — which is useful for studying how sun exposure impacts coral organisms. In 100 simulated trials in diverse underwater environments, a virtual PLUMES robot also consistently collected seven to eight times more samples of maxima than traditional coverage methods in allotted time frames.

“PLUMES does the minimal amount of exploration necessary to find the maximum and then concentrates quickly on collecting valuable samples there,” says co-first author Genevieve Flaspohler, a PhD student in CSAIL and the MIT-WHOI Joint Program.

Joining Preston and Flaspohler on the paper are: Anna P.M. Michel and Yogesh Girdhar, both scientists in the Department of Applied Ocean Physics and Engineering at the WHOI; and Nicholas Roy, a professor in CSAIL and in the Department of Aeronautics and Astronautics.  

Navigating an exploit-explore tradeoff

A key insight of PLUMES was using techniques from probability to reason about navigating the notoriously complex tradeoff between exploiting what’s learned about the environment and exploring unknown areas that may be more valuable.

“The major challenge in maximum-seeking is allowing the robot to balance exploiting information from places it already knows to have high concentrations and exploring places it doesn’t know much about,” Flaspohler says. “If the robot explores too much, it won’t collect enough valuable samples at the maximum. If it doesn’t explore enough, it may miss the maximum entirely.”

Dropped into a new environment, a PLUMES-powered robot uses a probabilistic statistical model called a Gaussian process to make predictions about environmental variables, such as chemical concentrations, and estimate sensing uncertainties. PLUMES then generates a distribution of possible paths the robot can take, and uses the estimated values and uncertainties to rank each path by how well it allows the robot to explore and exploit.

At first, PLUMES will choose paths that randomly explore the environment. Each sample, however, provides new information about the targeted values in the surrounding environment — such as spots with highest concentrations of chemicals or shallowest depths. The Gaussian process model exploits that data to narrow down possible paths the robot can follow from its given position to sample from locations with even higher value. PLUMES uses a novel objective function — commonly used in machine-learning to maximize a reward — to make the call of whether the robot should exploit past knowledge or explore the new area.
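
In skeleton form, that ranking step might look like the sketch below. The names and the upper-confidence-bound-style score are assumptions, since PLUMES uses its own objective function: a Gaussian process fit to the samples collected so far scores each candidate location by its predicted value plus its uncertainty.

```python
# A minimal sketch of explore-exploit scoring with a Gaussian process (hypothetical names).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def rank_candidate_locations(sampled_xy, sampled_values, candidate_xy, beta=2.0):
    """Return candidate locations ordered from most to least promising."""
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(np.asarray(sampled_xy), np.asarray(sampled_values))
    mean, std = gp.predict(np.asarray(candidate_xy), return_std=True)
    score = mean + beta * std           # high mean = exploit, high uncertainty = explore
    order = np.argsort(score)[::-1]     # best-scoring candidate first
    return [candidate_xy[i] for i in order]
```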

“Hallucinating” paths

The decision of where to collect the next sample relies on the system’s ability to “hallucinate” all possible future actions from its current location. To do so, it leverages a modified version of Monte Carlo Tree Search (MCTS), a path-planning technique popularized for powering artificial-intelligence systems that master complex games, such as Go and chess.

MCTS uses a decision tree — a map of connected nodes and lines — to simulate a path, or sequence of moves, needed to reach a final winning action. But in games, the space for possible paths is finite. In unknown environments, with real-time changing dynamics, the space is effectively infinite, making planning extremely difficult. The researchers designed “continuous-observation MCTS,” which leverages the Gaussian process and the novel objective function to search over this unwieldy space of possible real paths.

The root of this MCTS decision tree starts with a “belief” node, which is the next immediate step the robot can take. This node contains the entire history of the robot’s actions and observations up until that point. Then, the system expands the tree from the root into new lines and nodes, looking over several steps of future actions that lead to explored and unexplored areas.

Then, the system simulates what would happen if it took a sample from each of those newly generated nodes, based on some patterns it has learned from previous observations. Depending on the value of the final simulated node, the entire path receives a reward score, with higher values equaling more promising actions. Reward scores from all paths are rolled back to the root node. The robot selects the highest-scoring path, takes a step, and collects a real sample. Then, it uses the real data to update its Gaussian process model and repeats the “hallucination” process.
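
The planning cycle can be sketched, in heavily simplified form, as below. The structure and names are assumptions rather than the authors' implementation, and simulate_reward stands in for the Gaussian-process "hallucination" of a path's value.

```python
# A highly simplified select-expand-simulate-backpropagate sketch (assumed structure).
import math, random

class Node:
    def __init__(self, action=None, parent=None):
        self.action, self.parent = action, parent
        self.children, self.visits, self.value = [], 0, 0.0

def plan_next_move(candidate_actions, simulate_reward, n_iter=200, c=1.4):
    """simulate_reward(action_sequence) stands in for the model-based 'hallucination'
    of how rewarding that sequence of sampling moves would be."""
    root = Node()
    for _ in range(n_iter):
        node, path = root, []
        # selection: walk down the tree with an upper-confidence-bound rule
        while node.children:
            node = max(node.children,
                       key=lambda ch: ch.value / (ch.visits + 1e-9)
                       + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))
            path.append(node.action)
        # expansion: add one child per candidate next move
        node.children = [Node(a, node) for a in candidate_actions]
        child = random.choice(node.children)
        reward = simulate_reward(path + [child.action])  # simulated, not real, sample
        # backpropagation: roll the simulated reward back up to the root
        while child is not None:
            child.visits += 1
            child.value += reward
            child = child.parent
    # commit to the most-visited move, take a real sample there, then re-plan
    return max(root.children, key=lambda ch: ch.visits).action
```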

“As long as the system continues to hallucinate that there may be a higher value in unseen parts of the world, it must keep exploring,” Flaspohler says. “When it finally converges on a spot it estimates to be the maximum, because it can’t hallucinate a higher value along the path, it then stops exploring.”

Now, the researchers are collaborating with scientists at WHOI to use PLUMES-powered robots to localize chemical plumes at volcanic sites and study methane releases in melting coastal estuaries in the Arctic. Scientists are interested in the source of chemical gases released into the atmosphere, but these test sites can span hundreds of square miles.

“They can [use PLUMES to] spend less time exploring that huge area and really concentrate on collecting scientifically valuable samples,” Preston says.



from MIT News - Oceanography and ocean engineering https://ift.tt/2qjL1ba

Thursday, October 10, 2019

MIT alumna addresses the world’s mounting plastic waste problem

It’s been nearly 10 years since Priyanka Bakaya MBA ’11 founded Renewlogy to develop a system that converts plastic waste into fuel. Today, that system is being used to profitably turn even nonrecyclable plastic into high-value fuels like diesel, as well as the precursors to new plastics.

Since its inception, Bakaya has guided Renewlogy through multiple business and product transformations to maximize its impact. During the company’s evolution from a garage-based startup to a global driver of sustainability, it has licensed its technology to waste management companies in the U.S. and Canada, created community-driven supply chains for processing nonrecycled plastic, and started a nonprofit, Renew Oceans, to reduce the flow of plastic into the world’s oceans.

The latter project has brought Bakaya and her team to one of the most polluted rivers in the world, the Ganges. With an effort based in Varanasi, a city of much religious, political, and cultural significance in India, Renew Oceans hopes to transform the river basin by incentivizing residents to dispose of omnipresent plastic waste in its “reverse vending machines,” which provide coupons in exchange for certain plastics.

Each of Renewlogy’s initiatives has brought challenges Bakaya never could have imagined during her early days tinkering with the system. But she’s approached those hurdles with a creative determination, driven by her belief in the transformative power of the company.

“It’s important to focus on big problems you’re really passionate about,” Bakaya says. “The only reason we’ve stuck with it over the years is because it’s extremely meaningful, and I couldn’t imagine working this hard and long on something if it wasn’t deeply meaningful.”

A system for sustainability

Bakaya began working on a plastic-conversion system with Renewlogy co-founder and Chief Technology Officer Benjamin Coates after coming to MIT’s Sloan School of Management in 2009. While pursuing his PhD at the University of Utah, Coates had been developing continuously operating systems to create fuels from things like wood waste and algae conversion.

One of Renewlogy’s key innovations is using a continuous system on plastics, which saves energy by eliminating the need to reheat the system to the high temperatures necessary for conversion.

Today, plastics entering Renewlogy’s system are first shredded, then put through a chemical reformer, where a catalyst degrades their long carbon chains.

Roughly 15 to 20 percent of those chains are converted into hydrocarbon gas that Renewlogy recycles to heat the system. Five percent turns into char, and the remaining 75 percent is converted into high-value fuels. Bakaya says the system can create about 60 barrels of fuel for every 10 tons of plastic it processes, and it has a 75 percent lower carbon footprint when compared to traditional methods for extracting and distilling diesel fuel.
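
As a rough check on those figures (a back-of-the-envelope sketch with assumed values, not company data), converting 75 percent of 10 metric tons of plastic into diesel-range fuel works out close to the quoted barrel count:

```python
# Back-of-the-envelope check of the ~60 barrels per 10 tons figure (assumed values).
plastic_kg = 10 * 1000.0          # 10 metric tons of plastic feedstock
fuel_fraction = 0.75              # share converted to high-value fuels (from the article)
fuel_density_kg_per_l = 0.84      # rough density of diesel-range fuel (assumption)
barrel_liters = 159.0             # volume of one oil barrel

fuel_liters = plastic_kg * fuel_fraction / fuel_density_kg_per_l
print(round(fuel_liters / barrel_liters, 1))  # ~56 barrels, close to the quoted ~60
```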

In 2014, the company began running a large-scale plant in Salt Lake City, where it continues to iterate its processes and hold demonstrations.

Since then, Renewlogy has set up another commercial-scale facility in Nova Scotia, Canada, where the waste management company Sustane uses it to process about 10 tons of plastic a day, representing 5 percent of the total amount of solid waste the company collects. Renewlogy is also building a similar-sized facility in Phoenix, Arizona, that will be breaking ground next year. That project focuses on processing specific types of plastics (identified by international resin codes 3 through 7) that are less easily recycled.

In addition to its licensing strategy, the company is spearheading grassroots efforts to gather and process plastic that’s not normally collected for recycling, as part of the Hefty Energy Bag Program.

Through the program, residents in cities including Boise, Idaho, Omaha, Nebraska, and Lincoln, Nebraska, can put plastics numbered 4 through 6 into their regular recycling bins using special orange bags. The bags are separated at the recycling facility and sent to Renewlogy’s Salt Lake City plant for processing.

The projects have positioned Renewlogy to continue scaling and have earned Bakaya entrepreneurial honors from the likes of Forbes, Fortune, and the World Economic Forum. But a growing crisis in the world’s oceans has drawn her halfway across the world, to the site of the company’s most ambitious project yet.

Renewing the planet’s oceans

Of the millions of tons of plastic waste flowing through rivers into the world’s oceans each year, roughly 90 percent comes from just 10 rivers. The worsening environmental conditions of these rivers represent a growing global crisis that governments have put billions of dollars toward, often with discouraging results.

Bakaya believes she can help.

“Most of these plastics tend to be what are referred to as soft plastics, which are typically much more challenging to recycle, but are a good feedstock for Renewlogy’s process,” she says.

Bakaya started Renew Oceans as a separate, nonprofit arm of Renewlogy last year. Since then, Renew Oceans has designed fence-like structures to collect river waste that can then be brought to its scaled-down machines for processing. These machines can process between 0.1 and 1 ton of plastic a day.

Renew Oceans has already built its first machine, and Bakaya says deciding where to put it was easy.

From its origins in the Himalayas, the Ganges River flows over 1,500 miles through India and Bangladesh, serving as a means of transportation, irrigation, energy, and as a sacred monument to millions of people who refer to it as Mother Ganges.

Renewlogy’s first machine is currently undergoing local commissioning in the Indian city of Varanasi. Bakaya says the project is designed to scale.

“The aim is to take this to other major polluted rivers where we can have maximum impact,” Bakaya says. “We’ve started with the Ganges, but we want to go to other regions, especially around Asia, and find circular economies that can support this in the long term so locals can derive value from these plastics.”

Scaling down their system was another unforeseen project for Bakaya and Coates, who remember scaling up prototypes during the early days of the company. Throughout the years, Renewlogy has also adjusted its chemical processes in response to changing markets, having begun by producing crude oil, then moving to diesel as oil prices plummeted, and now exploring ways to create high-value petrochemicals like naphtha, which can be used to make new plastics.

Indeed, the company’s approach has featured almost as many twists and turns as the Ganges itself. Bakaya says she wouldn’t have it any other way.

“I’d really encourage entrepreneurs to not just go down that easy road but to really challenge themselves and try to solve big problems — especially students from MIT. The world is kind of depending on MIT students to push us forward and challenge the realm of possibility. We all should feel that sense of responsibility to solve bigger problems.”



from MIT News - Oceanography and ocean engineering https://ift.tt/2M0402I

Tuesday, August 20, 2019

A battery-free sensor for underwater exploration

To investigate the vastly unexplored oceans covering most of our planet, researchers aim to build a submerged network of interconnected sensors that send data to the surface — an underwater “internet of things.” But how to supply constant power to scores of sensors designed to stay for long durations in the deep ocean?

MIT researchers have an answer: a battery-free underwater communication system that uses near-zero power to transmit sensor data. The system could be used to monitor sea temperatures to study climate change and track marine life over long periods — and even sample waters on distant planets. They are presenting the system at the SIGCOMM conference this week, in a paper that has won the conference’s “best paper” award.

The system makes use of two key phenomena. One, called the “piezoelectric effect,” occurs when vibrations in certain materials generate an electrical charge. The other is “backscatter,” a communication technique commonly used for RFID tags, which transmits data by reflecting modulated wireless signals off a tag and back to a reader.

In the researchers’ system, a transmitter sends acoustic waves through water toward a piezoelectric sensor that has stored data. When the wave hits the sensor, the material vibrates and stores the resulting electrical charge. Then the sensor uses the stored energy to reflect a wave back to a receiver — or it doesn’t reflect one at all. Alternating between reflection in that way corresponds to the bits in the transmitted data: For a reflected wave, the receiver decodes a 1; for no reflected wave, the receiver decodes a 0.

“Once you have a way to transmit 1s and 0s, you can send any information,” says co-author Fadel Adib, an assistant professor in the MIT Media Lab and the Department of Electrical Engineering and Computer Science and founding director of the Signal Kinetics Research Group. “Basically, we can communicate with underwater sensors based solely on the incoming sound signals whose energy we are harvesting.”

The researchers demonstrated their Piezo-Acoustic Backscatter System in an MIT pool, using it to collect water temperature and pressure measurements. The system was able to transmit 3 kilobytes per second of accurate data from two sensors simultaneously at a distance of 10 meters between sensor and receiver.

Applications go beyond our own planet. The system, Adib says, could be used to collect data in the recently discovered subsurface ocean on Saturn’s largest moon, Titan. In June, NASA announced the Dragonfly mission to send a rover in 2026 to explore the moon, sampling water reservoirs and other sites.

“How can you put a sensor under the water on Titan that lasts for long periods of time in a place that’s difficult to get energy?” says Adib, who co-wrote the paper with Media Lab researcher JunSu Jang. “Sensors that communicate without a battery open up possibilities for sensing in extreme environments.”

Preventing deformation

Inspiration for the system hit while Adib was watching “Blue Planet,” a nature documentary series exploring various aspects of sea life. Oceans cover about 72 percent of Earth’s surface. “It occurred to me how little we know of the ocean and how marine animals evolve and procreate,” he says. Internet-of-things (IoT) devices could aid that research, “but underwater you can’t use Wi-Fi or Bluetooth signals … and you don’t want to put batteries all over the ocean, because that raises issues with pollution.”

That led Adib to piezoelectric materials, which have been around and used in microphones and other devices for about 150 years. They produce a small voltage in response to vibrations. But that effect is also reversible: Applying voltage causes the material to deform. If placed underwater, that effect produces a pressure wave that travels through the water. They’re often used to detect sunken vessels, fish, and other underwater objects.

“That reversibility is what allows us to develop a very powerful underwater backscatter communication technology,” Adib says.

Communicating relies on preventing the piezoelectric resonator from naturally deforming in response to strain. At the heart of the system is a submerged node, a circuit board that houses a piezoelectric resonator, an energy-harvesting unit, and a microcontroller. Any type of sensor can be integrated into the node by programming the microcontroller. An acoustic projector (transmitter) and underwater listening device, called a hydrophone (receiver), are placed some distance away.

Say the sensor wants to send a 0 bit. When the transmitter sends its acoustic wave at the node, the piezoelectric resonator absorbs the wave and naturally deforms, and the energy harvester stores a little charge from the resulting vibrations. The receiver then sees no reflected signal and decodes a 0.

However, when the sensor wants to send a 1 bit, the behavior changes. When the transmitter sends a wave, the microcontroller uses the stored charge to apply a small voltage to the piezoelectric resonator. That voltage reorients the material’s structure in a way that stops it from deforming, so it reflects the wave instead. Sensing a reflected wave, the receiver decodes a 1.
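
On the receiving end, decoding reduces to asking, once per bit slot, whether a reflection came back. A minimal receiver-side sketch (hypothetical names and a simple energy threshold, not the authors' code) might look like this:

```python
# A minimal sketch of decoding backscattered bits from hydrophone samples (assumed setup).
import numpy as np

def decode_bits(hydrophone_samples, samples_per_slot, energy_threshold):
    """hydrophone_samples: 1-D array of received signal, one projector ping per slot.
    A strong echo in a slot decodes as 1 (node reflected); a quiet slot decodes as 0."""
    bits = []
    for start in range(0, len(hydrophone_samples), samples_per_slot):
        slot = np.asarray(hydrophone_samples[start:start + samples_per_slot], dtype=float)
        energy = float(np.sum(slot ** 2))
        bits.append(1 if energy > energy_threshold else 0)
    return bits
```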

Long-term deep-sea sensing

The transmitter and receiver must have power but can be planted on ships or buoys, where batteries are easier to replace, or connected to outlets on land. One transmitter and one receiver can gather information from many sensors covering one area or many areas.

“When you’re tracking a marine animal, for instance, you want to track it over a long range and want to keep the sensor on them for a long period of time. You don’t want to worry about the battery running out,” Adib says. “Or, if you want to track temperature gradients in the ocean, you can get information from sensors covering a number of different places.”

Another interesting application is monitoring brine pools, large areas of brine that sit in pools in ocean basins, and are difficult to monitor long-term. They exist, for instance, on the Antarctic Shelf, where salt settles during the formation of sea ice, and could aid in studying melting ice and marine life interaction with the pools. “We could sense what’s happening down there, without needing to keep hauling sensors up when their batteries die,” Adib says.

Polly Huang, a professor of electrical engineering at National Taiwan University, praised the work for its technical novelty and potential impact on environmental science. “This is a cool idea,” Huang says. “It's not news one uses piezoelectric crystals to harvest energy … [but is the] first time to see it being used as a radio at the same time [which] is unheard of to the sensor network/system research community. Also interesting and unique is the hardware design and fabrication. The circuit and the design of the encapsulation are both sound and interesting.”

While noting that the system still needs more experimentation, especially in sea water, Huang adds that “this might be the ultimate solution for researchers in marine biology, oceanography, or even meteorology — those in need of long-term, low-human-effort underwater sensing.”

Next, the researchers aim to demonstrate that the system can work at farther distances and communicate with more sensors simultaneously. They’re also hoping to test if the system can transmit sound and low-resolution images.

The work is sponsored, in part, by the U.S. Office of Naval Research.



from MIT News - Oceanography and ocean engineering https://ift.tt/30kHd6z

Wednesday, July 24, 2019

Tuna are spawning in marine protected areas

Marine protected areas are large swaths of coastal seas or open ocean that are protected by governments from activities such as commercial fishing and mining. Such marine sanctuaries have had rehabilitating effects on at-risk species living within their borders. But it’s been less clear how they benefit highly migratory species such as tuna.

Now researchers at MIT and the Woods Hole Oceanographic Institution have found evidence that tuna are spawning in the Phoenix Islands Protected Area (PIPA), one of the largest marine protected areas in the world, covering an area of the central Pacific as large as Argentina.

The researchers observed multiple species of tuna larvae throughout this protected expanse, suggesting that several migratory species are using these protected waters as a reproductive stopover over several consecutive years, and even during a particularly strong El Niño season, when PIPA may have provided a critical refuge.

The results, published this week in the journal Scientific Reports, suggest that marine protected areas may be ocean oases for migratory fish, with plentiful nutrients and clean, clear waters that encourage tuna and other migratory species to linger, and spawn often. The study supports the notion that marine protected areas can provide protection to adult fish during spawning, and in this way, help to bolster fish populations — particularly those that, outside protected areas, are in danger of overfishing.

“We have proven that tuna are spawning in this protected area, and that it’s worth protecting,” says Christina Hernández, a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “There are various types of protection for marine areas around the world, and all those measures allow us to preserve populations better, and in some cases protect highly migratory species.”

Sea change in conservation

The Phoenix Islands Protected Area is part of the territorial waters of the Republic of Kiribati (pronounced Keer-ee-bahs), a sovereign state in Micronesia made up of three island chains in the central Pacific. The islands, if stitched together, would amount to no more than the land area of Cape Cod. However, Kiribati’s ocean territory is vast, extending 200 nautical miles from each of its 32 atolls. The people of Kiribati rely heavily on revenue from tuna licenses that they mete out to commercial fishers. In 2008, however, the republic designated 11 percent of its waters as a mixed-use marine protected area, with limited fishing. Officials ultimately banned all fishing activities in the region starting in 2015, in a conservation effort that — among other things — protected many endangered species, such as giant clams and coconut crab, along with birds, mammals, and sea turtles living within its boundaries.

While fishing vessels have respected the protected territory, keeping their activities outside PIPA’s boundaries, legal fishing efforts surrounding PIPA caused the researchers to wonder whether PIPA might eventually provide an economic gain in the form of “spillover effects.” In other words, if an ecological region is preserved over long periods of time, it might produce more fish that, once full-grown, might cross the territory’s boundaries, benefiting both Kiribati and the regional fishing community.

Hernández’s colleague, Randi Rotjan of Boston University, had been working with the Republic of Kiribati on ways to scientifically monitor PIPA, and wanted to assess whether the protected region might also serve as protected spawning grounds for migratory tuna.

In 2014, the team began yearly expeditions to the central Pacific to sample within PIPA for tuna larvae (fish younger than four weeks old), whose presence would indicate recent spawning activity in the region. The researchers embarked on a 140-foot-long student sailing vessel, owned and operated by Sea Education Association, which also collaborated on this study. Sailing from Hawaii, the ship reached the edges of PIPA after about a 10-day journey. Once within the protected area, the team began sampling the waters for tiny fish, using three different nets: one designed to collect at a depth of 100 meters, one at 50 meters, and one skimming the surface.

The team pulled up nets teeming with ocean plankton, including tuna larvae, along with tiny crustaceans, jellyfish, pelagic worms, and anchovies, all of which they preserved and transported back to Massachusetts, where they carried out analyses to extract and identify the number and type of tuna larvae amid the rest of the catch.

From 2015 to 2017, the three years included in the current paper, the researchers analyzed samples from over 175 net tows, and identified more than 600 tuna larvae, covering a distance within PIPA of more than 650 nautical miles, or 1,200 kilometers. Compared with a handful of previous studies on tuna larvae populations, Hernández says the number and density of larvae they found is “pretty on track for what we expect for this part of the Pacific.”

“Larval populations can’t really control how they move, and they get mixed around by ocean currents and dispersed away from each other,” Hernández explains. “As they continue to grow, they start to school and are in denser aggregations. But as larvae, they live at low densities.”

The tuna larvae appeared in roughly similar abundances across all three years, even in 2015, when a strong El Niño season dramatically altered ocean conditions.

“That’s something that’s relatively good news, that the protected area seems to be pretty good habitat across environmental conditions,” Hernández says.

The team identified the tuna larvae in their samples as skipjack, bigeye, and yellowfin.

“These particular fish are not so picky about where they spawn, and they can spawn every two to three days, for a couple of months,” Hernández says. “If they’re thinking the food is pretty good in PIPA, they may stay inside its boundaries for a few weeks, and might have additional spawning events that they wouldn’t have if they were outside the protected area, where they could get caught before they spawn.”

The results are the first evidence that highly migratory species spawn in marine protected areas. But whether such regions encourage species to reproduce more than in other, unprotected waters will require studies over a longer period of time.

“We have to protect these areas long enough to figure out if they are causing an increase in tuna populations,” Hernández says. “The amount of information we have about the Pacific tuna is paltry. And it’s critically important that we study the early life stages of fishes, and that we monitor protected areas, and populations of tuna, as the ocean changes.”

This work was supported in part by the PIPA Trust, Sea Education Association, the Prince Albert of Monaco Foundation II, New England Aquarium, and Boston University.



from MIT News - Oceanography and ocean engineering https://ift.tt/2JZx6wV

martes, 25 de junio de 2019

Projects advance naval ship design and capabilities

For the past 20 years, officials from the U.S. Navy and leaders in the shipbuilding industry have convened on MIT’s campus each spring for the MIT Ship Design and Technology Symposium. The daylong event is a platform to update industry and military leaders on the latest groundbreaking research in naval construction and engineering being conducted at MIT.

The main event of the symposium was the design project presentations given by Course 2N (Naval Construction and Engineering) graduate students. These projects serve as a capstone of their three-year curriculum.

This year, recent graduate Andrew Freeman MEng '19, SM '19, who was advised by Dick K. P. Yue, the Philip J. Solondz Professor of Engineering, and William Taft MEng '19, SM '19, who works with James Kirtley, professor of electrical engineering and computer science, presented their current research. Rear Admiral Ronald A. Boxall, director of surface warfare at the U.S. Navy, served as keynote speaker at the event, which took place in May.

“The Ship Design and Technology Symposium gives students in the 2N program the opportunity to present ship and submarine design and conversions, as well as thesis research, to the leaders of the U.S. Navy and design teams from industry,” explains Joe Harbour, professor of the practice of naval construction at MIT. “Through the formal presentation and poster sessions, the naval and industrial leaders can better understand opportunities to improve designs and design processes.”

Since 1901, the Course 2N program has been educating active-duty officers in the Navy and U.S. Coast Guard, in addition to foreign naval officers. This year, eight groups of 2N students presented design or conversion project briefs to an audience of experts in the Samberg Conference Center.

The following three projects exemplify the ways in which these students are adapting existing naval designs and creating novel designs that can help increase the capabilities and efficiency of naval vessels.

The next generation of hospital ships

The Navy has a fleet of hospital ships ready for any major combat situations that might arise. These floating hospitals allow doctors to care for large numbers of casualties, perform operations, stabilize patients, and help transfer patients to other medical facilities.

Lately, these ships have been instrumental in response efforts during major disasters — such as the recent hurricanes in the Caribbean. The ships also provide an opportunity for doctors to train local medical professionals in developing countries.

The Navy's current fleet of hospital ships is aging. Designed in the 1980s, these ships require an update to complement the way naval operations are conducted in modern times. As such, the U.S. Navy is looking to launch the next fleet of hospital ships in 2035.

A team of Course 2N students including Aaron Sponseller, Travis Rapp, and Robert Carelli was tasked with assessing current hospital ship designs and proposing a design for the next generation of hospital ships.

“We looked at several different hull form sizes that could achieve the goals of our sponsors, and assigned scores to rank their attributes and determine which one could best achieve their intended mission,” explains Carelli.

In addition to visiting the USNS Mercy, one of the Navy’s two current hospital ships, the team toured nearby Tufts Medical Center to get a sense of what a state-of-the-art medical facility looks like. One thing that immediately struck the team was how different the electrical needs of a modern-day medical facility are from the needs of nearly 40 years ago, when the current hospital ships were first being designed.

“Part of the problem with the current ships is they scaled their electrical capacity with older equipment from the 1980s in mind,” adds Rapp. This capacity doesn’t account for the increased electrical burden of digital CT scans, high-tech medical devices, and communication suites.

The current ships have a separate propulsion plant and electrical generation plant. The team found that combining the two would increase the ship’s electrical capacity, especially while "on station" — a term used when a ship maintains its position in the water.

“These ships spend a lot of time on station while doctors operate on patients,” explains Carelli. “By using the same system for propelling and electrical generation, you have a lot more capacity for these medical operations when it’s on station and for speed when the ship is moving.”

The team also recommended that the ship be downsized and tailored to treat intensive care cases rather than having such large stable patient areas. “We trimmed the fat, so to speak, and are moving the ship toward what really delivers value — intensive care capability for combat operations,” says Rapp.

The team hopes their project will inform the decisions the Navy makes when it replaces its large hospital ships in 2035. “The Navy goes through multiple iterations of defining how they want their next ship to be designed and we are one small step in that process,” adds Sponseller.

Autonomous fishing vessels

Over the past few decades, advances in artificial intelligence and sensory hardware have led to increasingly sophisticated unmanned vehicles in the water. Sleek autonomous underwater vehicles operate below the water’s surface. Rather than work on these complex and often expensive machines, Course 2N students Jason Barker, David Baxter, and Brian Stanfield assessed the possibility of using something far more commonplace for their design project: fishing vessels.

“We were charged with looking at the possibility of going into a port, acquiring a low-end vessel like a fishing boat, and making that boat an autonomous machine for various missions,” explains Barker.

With such a broad scope, Barker and his teammates set some parameters to guide their research. They homed in on one fishing boat in particular: a 44-foot drum seiner.

The next step was determining how such a vessel could be outfitted with sensors to carry out a range of missions including measuring marine life, monitoring marine traffic in a given area, carrying out intelligence, surveillance and reconnaissance (ISR) missions, and, perhaps most importantly, conducting search and rescue operations.

The team estimated that the cost of transforming an everyday fishing boat into an autonomous vehicle would be roughly $2 million — substantially lower than building a new autonomous vehicle. The relatively low cost could make this an appealing exercise in areas where piracy is a potential concern. “Because the price of entry is so low, it’s not as risky as using a capital asset in these areas,” Barker explains.

The low price could also lead to a number of such autonomous vehicles in a given area. “You could put out a lot of these vessels,” adds Barker. “With the advances of swarm technologies you could create a network or grid of autonomous boats.”

Increasing endurance and efficiency in Freedom-class ships

For Course 2N student Charles Hasenbank, working on a conversion project for the engineering plant of Freedom-class ships was a natural fit. As a lieutenant in the U.S. Navy, Hasenbank served on the USS Freedom.

Freedom-class ships can reach upwards of 40 knots, 10 knots faster than most combat ships. “To get those extra knots requires a substantial amount of power,” explains Hasenbank. This power is generated by two diesel engines and two gas turbines that are also used to power large aircraft like the Dreamliner.

For their new frigate program, the Navy is looking to achieve a maximum speed of 30 knots, making the extra power provided by these engines unnecessary. The endurance range of these new frigates, however, would be higher than what the current Freedom-class ships allow. As such, Hasenbank and his fellow students Tikhon Ruggles and Cody White were tasked with exploring alternate forms of propulsion.

The team had five driving criteria in determining how to best convert the ships’ power system — minimize weight changes, increase efficiency, maintain or decrease acquisition costs, increase simplicity, and improve fleet commonality.

“The current design is a very capable platform, but the efficiencies aren’t there because speed was a driving factor,” explains Hasenbank.

When redesigning the engineering plant, the team landed on the use of four propellers, which would maintain the amount of draft currently experienced by these ships. To accommodate this change, the structure of the stern would need to be altered.

By removing a step currently in the stern design, the team made an unexpected discovery. Above 12 knots, their stern design would decrease hull resistance. “Something we didn’t initially expect was we improved efficiency and gained endurance through decreasing the hull resistance,” adds Hasenbank. “That was a nice surprise along the way.”

The team’s new design would meet the 30-knot speed requirement of the new frigate program while adding between 500 and 1,000 nautical miles of endurance to the ship.

Along with the other design projects presented at the MIT Ship Design and Technology Symposium, the work conducted by Hasenbank and his team could inform important decisions the U.S. Navy has to make in the coming years as it looks to update and modernize its fleet.



from MIT News - Oceanography and ocean engineering http://bit.ly/2NcUpZf

lunes, 10 de junio de 2019

Solving equations to design safer ships

David Larson ’16, SM ’18 spends much of his time thinking about boats. He has been a competitive sailor since high school. In his free time, he designs and tinkers with boats and is a member of the MIT Nautical Association Executive Committee. As a PhD student in MIT’s Laboratory for Ship and Platform Flows, he works on modeling ship-wave interactions to understand how ships behave in severe storms.

“I think I got into design and engineering through the sailing route,” says Larson. “I wanted to understand the physics of what was happening when I was out on the water.”

It was sailing that first drew Larson, who grew up near the water in San Diego, California, to MIT. On a trip during his first year of high school, he stayed at a hotel on Memorial Drive and watched sailboats dart along the Charles River. Four years later, he enrolled at MIT.

Initially intent on studying physics, Larson quickly determined that he was most interested in mechanical engineering and ocean engineering classes. As a sophomore, he took class 2.016 (Hydrodynamics), taught by Paul Sclavounos, professor of mechanical engineering and naval architecture. The class would end up shaping the rest of his academic career.

On the second day of teaching 2.016, Sclavounos told students about his experiences designing for the America’s Cup. Larson knew some of the sailors with whom Sclavounos had worked. The two struck up a conversation after class, marking the beginning of their collaboration.

“Professor Sclavounos was the most influential in encouraging me to continue studying ocean engineering and naval architecture,” recalls Larson. Sclavounos recognized Larson’s talent and passion, often taking time after class to explain theories that Larson hadn’t yet learned.

“He was by far the best student in the class and was eagerly sought after by other students to help them through the course,” adds Sclavounos. “It was immediately evident to me that he possessed an intelligence and maturity unusual for his age.”

After graduating with his bachelor’s degree in 2016, Larson enrolled in MIT’s graduate program for mechanical engineering and ocean engineering. The summer between his undergraduate and graduate studies, he went back to his native California for an internship with Morrelli and Melvin Design and Engineering.

As an intern, Larson got to apply the concepts he learned as an undergrad — like controls, geometry optimization, and fluid mechanics — to real-world ship design. “That experience gave me a lot of practical insight into what the actual ship design process entails,” says Larson.

Back at MIT, Larson has spent his graduate studies working with Sclavounos on developing stochastic models for how ships interact with waves. While his work seems at times theoretical and abstract, it is grounded in a very practical problem: keeping ships safe in extreme weather.

“What I’m doing is motivated by practical ship design and manufacturing,” explains Larson. “I’m working to create a framework that gets more accurate predictions for how ships behave in severe storms, and to get those predictions fast enough to use in iterative design.”

Current models have come a long way in enhancing our ability to predict how waves move in the ocean. But many existing models that predict how ships move in waves, while extremely powerful, are constrained to one or two degrees of freedom or rely on oversimplified hull geometries. Larson hopes to take those models to the next level.

“The key components of our method are that we can take any realistic ship geometry directly from a CAD program, put that geometry through our model that treats the full six degrees of freedom, and get predictions for how these ships will behave in waves,” explains Larson.
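For readers less familiar with the term, the six degrees of freedom of a floating body are three translations (surge, sway, heave) and three rotations (roll, pitch, yaw). The sketch below shows one minimal way such a motion state could be represented in code; it is an illustration only, not part of Larson’s framework.

```python
from dataclasses import dataclass

@dataclass
class ShipMotionState:
    """Six-degree-of-freedom state of a floating body.

    Translations are in meters and rotations in radians, using the
    conventional seakeeping names for each degree of freedom.
    """
    surge: float  # translation along the forward axis
    sway: float   # translation along the lateral axis
    heave: float  # vertical translation
    roll: float   # rotation about the forward axis
    pitch: float  # rotation about the lateral axis
    yaw: float    # rotation about the vertical axis

# Example: a response with some heave and pitch, all other motions zero.
state = ShipMotionState(surge=0.0, sway=0.0, heave=0.4,
                        roll=0.0, pitch=0.02, yaw=0.0)
print(state)
```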

Understanding how these ships behave in rough water could have immediate industrial applications. In addition to helping sailors find the safest route for their vessels, the predictions could be used to someday facilitate interactive ship design.

“My long-term goal is to eventually create an interface that can be used by design and manufacturing engineers for iterative design and optimization of the next generation of ships,” says Larson.

When Larson needs a break from mathematical equations and modeling, he uses CAD to design boats. “My research is quite mathematical, so designing boats is my outlet for reconnecting with the experimental and practical work I loved doing as an undergrad,” he adds.

Whether it’s designing boats in his spare time, competitive sailing, umpiring collegiate races across New England, helping the MIT Sailing Pavilion design its next fleet of dinghies, or developing a model to predict how ships behave in choppy seas — Larson will continue to pursue the passion for sailing he developed in childhood.



from MIT News - Oceanography and ocean engineering http://bit.ly/2WXwGzU

martes, 14 de mayo de 2019

Tropical Pacific is major player in global ocean heat transport

Far from the vast, fixed bodies of water oceanographers thought they were a century ago, oceans today are known to be interconnected, highly influential agents in Earth’s climate system.

A major turning point in our understanding of ocean circulation came in the early 1980s, when research began to indicate that water flowed between remote regions, a concept later termed the “great ocean conveyor belt.”

The theory holds that warm, shallow water from the South Pacific flows to the Indian and Atlantic oceans, where, upon encountering frigid Arctic water, it cools and sinks to great depth. This cold water then cycles back to the Pacific, where it reheats and rises to the surface, beginning the cycle again.

This migration of water has long been thought to play a vital role in circulating warm water, and thus heat, around the globe. Without it, estimates suggest, average winter temperatures in Europe would be several degrees cooler.

However, recent research indicates that these global-scale seawater pathways may play less of a role in Earth’s heat budget than traditionally thought. Instead, one region may be doing most of the heavy lifting.

A paper published in April in Nature Geoscience by Gael Forget, a research scientist in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and a member of the Program in Atmospheres, Oceans, and Climate, and David Ferreira, an associate professor in the Department of Meteorology at the University of Reading (and former EAPS postdoc), found that global ocean heat transport is dominated by heat export from the tropical Pacific.

Using a state-of-the-art ocean circulation model with nearly complete global ocean data sets, the researchers demonstrated the overwhelming predominance of the tropical Pacific in distributing heat across the globe, from the equator to the poles. In particular, they found the region exports four times as much heat as is imported in the Atlantic and Arctic.

“We are not questioning the fact that there is a lot of water going from one basin into another,” says Forget. “What we're saying is, the net effect of these flows on heat transport is relatively small. This result indicates that the global conveyor belt may not be the most useful framework in which to understand global ocean heat transport.”

Updating ECCO

The study was performed using a modernized version of a global ocean circulation model called Estimating the Circulation and Climate of the Ocean (ECCO). ECCO is the brainchild of Carl Wunsch, EAPS professor emeritus of physical oceanography, who envisioned this massive undertaking in the 1980s.

Today, ECCO is often considered the best record of ocean circulation to date. Recently, Forget has spearheaded extensive updates to ECCO, resulting in its fourth generation, which has since been adopted by NASA.

One of the major updates made under Forget’s leadership was the addition of the Arctic Ocean. Previous versions omitted the area due to a grid design that squeezed resolution at the poles. In the new version, however, the grid mimics the pattern of a volleyball, with six equally distributed grid areas covering the globe.

Forget and his collaborators also added in new data sets (on things like sea ice and geothermal heat fluxes) and refined the treatment of others. To do so, they took advantage of the advent of worldwide data collection efforts, like ARGO, which for 15 years has been deploying autonomous profiling floats across the globe to collect ocean temperature and salinity profiles.

“These are good examples of the kind of data sets that we need to inform this problem on a global scale,” says Forget. “They're also the kind of data sets that have allowed us to constrain crucial model parameters.”

Parameters, which represent processes that occur at too small a scale to be resolved at a model’s finite resolution, play an important role in how realistic the model’s results are (in other words, how closely its findings match up with what we see in the real world). One of many updates Forget made to ECCO involved the ability to adjust, within the model, parameters that represent small-scale and mesoscale mixing of the ocean.

“By allowing the estimation system to adjust those parameters, we improved the fit to the data significantly,” says Forget.
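The general idea behind that kind of adjustment, tuning an uncertain parameter until a model better matches observations, can be illustrated with a toy example. The numbers below are made up, and the brute-force search is only a stand-in; the actual ECCO system estimates a very large set of control variables with gradient-based (adjoint) optimization.

```python
import numpy as np

# Toy "observations": a temperature anomaly decaying with depth (made-up numbers).
depth = np.array([0.0, 50.0, 100.0, 200.0, 400.0])   # meters
obs   = np.array([1.00, 0.70, 0.48, 0.24, 0.06])      # degrees C

def model(kappa, z):
    """Toy one-parameter model: the anomaly decays over an e-folding depth kappa."""
    return np.exp(-z / kappa)

def misfit(kappa):
    """Sum-of-squares mismatch between the toy model and the observations."""
    return np.sum((model(kappa, depth) - obs) ** 2)

# Brute-force search over candidate e-folding depths.
candidates = np.linspace(50.0, 500.0, 1000)
best = candidates[np.argmin([misfit(k) for k in candidates])]
print(f"best-fit e-folding depth: {best:.0f} m, misfit: {misfit(best):.4f}")
```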

The balancing act

With a new and improved foundational framework, Forget and Ferreira then sought to resolve another contentious issue: how to best measure and interpret ocean heat transport.

Ocean heat transport is calculated as both the product of seawater temperature and velocity and the exchange of heat between the ocean and the atmosphere. How to balance these events — the exchange of heat from the “source to sink” — requires sussing out which factors matter the most, and where.
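For reference, the oceanic side of that calculation is conventionally written as the product of velocity and temperature integrated over a full-depth latitude section. The expression below is the standard textbook form of meridional ocean heat transport, not the specific diagnostics of the Nature Geoscience paper.

```latex
H(y) \;=\; \rho_0\, c_p \int_{-H}^{0} \int_{x_w(y)}^{x_e(y)} v(x,y,z)\,\theta(x,y,z)\; dx\, dz
```

Here \(\rho_0\) is a reference seawater density, \(c_p\) the specific heat of seawater, \(v\) the northward velocity, and \(\theta\) the potential temperature. In a steady state, the change in \(H\) between two latitudes must balance the air-sea heat exchange over the ocean surface between them, which is the source-to-sink bookkeeping described above.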

Forget and Ferreira’s is the first framework that reconciles both the atmospheric and oceanic perspectives. Combining satellite data, which captures the intersection of the air and sea surface, with field data on what’s happening below the surface, the researchers created a three-dimensional representation of how heat transfers between the air, sea surface, and ocean columns.

Their results revealed a new perspective on ocean heat transport: that net ocean heat redistribution takes place primarily within oceanic basins rather than via the global seawater pathways that compose the great conveyor belt.

When the researchers removed internal ocean heat loops from the equation, they found that heat redistribution within the Pacific was the largest source of heat exchange. The region, they found, dominates the transfer of heat from the equator to the poles in both hemispheres.  

“We think this is a really important finding,” says Forget. “It clarifies a lot of things and, hopefully, puts us, as a community, on stronger footing in terms of better understanding ocean heat transport.”

Future implications

The findings have profound implications for how scientists may observe and monitor the ocean going forward, says Forget.

“The community that deals with ocean heat transport, on the ocean side, tends to focus a lot on the notion that there is a region of loss, and maybe overlooks a little bit how important the region of gain may be,” says Forget.

In practice, this has meant a focus on the North Atlantic and Arctic oceans, where heat is lost, and less focus on the tropical Pacific, where the ocean gains heat. These viewpoints often dictate priorities for funding and observational strategies, including where instruments are deployed.  

“Sometimes it’s a balance between putting a lot of measurements in one specific place, which can cost a lot of money, versus having a program that's really trying to cover a global effort,” says Forget. “Those two things sometimes compete with each other.”

In the article, Forget and Ferreira make the case that sustained observation of the global ocean as a whole, not just at a few locations and gates separating ocean basins, is crucial to monitor and understand ocean heat transport.

Forget also acknowledges that the findings go against some established schools of thought, and is eager to continue research in the area and hear different perspectives.

“We are expecting to stimulate some debate, and I think it's going to be exciting to see,” says Forget. “If there is pushback, all the better.”



from MIT News - Oceanography and ocean engineering http://bit.ly/2Hj3YAy

lunes, 6 de mayo de 2019

North Atlantic Ocean productivity has dropped 10 percent during Industrial era

Virtually all marine life depends on the productivity of phytoplankton — microscopic organisms that work tirelessly at the ocean’s surface to absorb the carbon dioxide that gets dissolved into the upper ocean from the atmosphere.

Through photosynthesis, these microbes break down carbon dioxide into oxygen, some of which ultimately gets released back to the atmosphere, and organic carbon, which they store until they themselves are consumed. This plankton-derived carbon fuels the rest of the marine food web, from the tiniest shrimp to giant sea turtles and humpback whales.

Now, scientists at MIT, Woods Hole Oceanographic Institution (WHOI), and elsewhere have found evidence that phytoplankton’s productivity is declining steadily in the North Atlantic, one of the world’s most productive marine basins.

In a paper appearing today in Nature, the researchers report that phytoplankton’s productivity in this important region has gone down around 10 percent since the mid-19th century and the start of the Industrial era. This decline coincides with steadily rising surface temperatures over the same period of time.

Matthew Osman, the paper’s lead author and a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, says there are indications that phytoplankton’s productivity may decline further as temperatures continue to rise as a result of human-induced climate change.

“It’s a significant enough decline that we should be concerned,” Osman says. “The amount of productivity in the oceans roughly scales with how much phytoplankton you have. So this translates to 10 percent of the marine food base in this region that’s been lost over the industrial era. If we have a growing population but a decreasing food base, at some point we’re likely going to feel the effects of that decline.”

Drilling through “pancakes” of ice

Osman and his colleagues looked for trends in phytoplankton’s productivity using the molecular compound methanesulfonic acid, or MSA. When phytoplankton expand into large blooms, certain microbes emit dimethylsulfide, or DMS, a gas that is lofted into the atmosphere and eventually breaks down into either sulfate aerosol or MSA, which is then deposited on sea or land surfaces by winds.

“Unlike sulfate, which can have many sources in the atmosphere, it was recognized about 30 years ago that MSA had a very unique aspect to it, which is that it’s only derived from DMS, which in turn is only derived from these phytoplankton blooms,” Osman says. “So any MSA you measure, you can be confident has only one unique source — phytoplankton.”

In the North Atlantic, phytoplankton likely produced MSA that was deposited to the north, including across Greenland. The researchers measured MSA in Greenland ice cores — in this case using 100- to 200-meter-long columns of snow and ice that represent layers of past snowfall events preserved over hundreds of years.

“They’re basically sedimentary layers of ice that have been stacked on top of each other over centuries, like pancakes,” Osman says.

The team analyzed 12 ice cores in all, each collected from a different location on the Greenland ice sheet by various groups from the 1980s to the present. Osman and his advisor, Sarah Das, an associate scientist at WHOI, collected one of the cores during an expedition in April 2015.

“The conditions can be really harsh,” Osman says. “It’s minus 30 degrees Celsius, windy, and there are often whiteout conditions in a snowstorm, where it’s difficult to differentiate the sky from the ice sheet itself.”

The team was nevertheless able to extract, meter by meter, a 100-meter-long core, using a giant drill that was delivered to the team’s location via a small ski-equipped airplane. They immediately archived each ice core segment in a heavily insulated cold storage box, then flew the boxes on “cold deck flights” — aircraft with ambient conditions of around minus 20 degrees Celsius. Once the planes touched down, freezer trucks transported the ice cores to the scientists’ ice core laboratories.

“The whole process of how one safely transports a 100-meter section of ice from Greenland, kept at minus-20-degree conditions,  back to the United States is a massive undertaking,” Osman says.

Cascading effects

The team incorporated the expertise of researchers at various labs around the world in analyzing each of the 12 ice cores for MSA. Across all 12 records, they observed a conspicuous decline in MSA concentrations, beginning in the mid-19th century, around the start of the Industrial era when the widescale production of greenhouse gases began. This decline in MSA is directly related to a decline in phytoplankton productivity in the North Atlantic.
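As a purely illustrative sketch of how a long-term percentage decline could be estimated from a proxy time series, the toy calculation below compares hypothetical pre-industrial and industrial-era mean MSA concentrations. The numbers are invented and the comparison of era means is a simplification, not the study’s data or statistical method.

```python
import numpy as np

# Hypothetical annual-mean MSA concentrations (arbitrary units) for two eras.
rng = np.random.default_rng(0)
preindustrial = 10.0 + rng.normal(0.0, 0.8, size=150)  # stand-in for ~1700-1850
industrial    =  9.0 + rng.normal(0.0, 0.8, size=150)  # stand-in for ~1850-2000

decline = 1.0 - industrial.mean() / preindustrial.mean()
print(f"estimated decline: {decline:.1%}")  # roughly 10% with these invented values
```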

“This is the first time we’ve collectively used these ice core MSA records from all across Greenland,  and they show this coherent signal. We see a long-term decline that originates around the same time as when we started perturbing the climate system with industrial-scale greenhouse-gas emissions,” Osman says. “The North Atlantic is such a productive area, and there’s a huge multinational fisheries economy related to this productivity. Any changes at the base of this food chain will have cascading effects that we’ll ultimately feel at our dinner tables.”

The multicentury decline in phytoplankton productivity appears to coincide not only with concurrent long-term warming temperatures; it also shows synchronous variations on decadal time-scales with the large-scale ocean circulation pattern known as the Atlantic Meridional Overturning Circulation, or AMOC. This circulation pattern typically acts to mix layers of the deep ocean with the surface, allowing the exchange of much-needed nutrients on which phytoplankton feed.

In recent years, scientists have found evidence that AMOC is weakening, a process that is still not well-understood but may be due in part to warming temperatures increasing the melting of Greenland’s ice. This ice melt has added an influx of less-dense freshwater to the North Atlantic, which acts to stratify, or separate its layers, much like oil and water, preventing nutrients in the deep from upwelling to the surface. This warming-induced weakening of the ocean circulation could be what is driving phytoplankton’s decline. As the atmosphere warms the upper ocean in general, this could also further the ocean’s stratification, worsening phytoplankton’s productivity.

“It’s a one-two punch,” Osman says. “It’s not good news, but the upshot to this is that we can no longer claim ignorance. We have evidence that this is happening, and that’s the first step you inherently have to take toward fixing the problem, however we do that.”

This research was supported in part by the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), as well as graduate fellowship support from the US Department of Defense Office of Naval Research.



from MIT News - Oceanography and ocean engineering http://bit.ly/2V1ItYF

viernes, 3 de mayo de 2019

Study demonstrates seagrass’ strong potential for curbing erosion

Most people’s experience with seagrass, if any, amounts to little more than a tickle on their ankles while wading in shallow coastal waters. But it turns out these ubiquitous plants, varieties of which exist around the world, could play a key role in protecting vulnerable shores as they face onslaughts from rising sea levels.

New research for the first time quantifies, through experiments and mathematical modelling, just how large and how dense a continuous meadow of seagrass must be to provide adequate damping of waves in a given geographic, climatic, and oceanographic setting.

In a pair of papers appearing in the May issues of two research journals, Coastal Engineering and the Journal of Fluids and Structures, MIT professor of civil and environmental engineering Heidi Nepf and doctoral student Jiarui Lei describe their findings and the significant environmental benefits seagrass offers. These include not only preventing beach erosion and protecting seawalls and other structures, but also improving water quality and sequestering carbon to help limit future climate change.

Those services, coupled with better-known services such as providing habitat for fish and food for other marine creatures, mean that submerged aquatic vegetation including seagrass provides an overall value of more than $4 trillion globally every year, as earlier studies have shown. Yet today, some important seagrass areas such as the Chesapeake Bay are down to about half of their historic seagrass coverage (having rebounded from a low of just 2 percent), thus limiting the availability of these valuable services.

Nepf and Lei recreated artificial versions of seagrass, assembled from materials of different stiffness to reproduce the long, flexible blades and much stiffer bases that are typical of seagrass plants such as Zostera marina, also known as common eelgrass. They set up a meadow-like collection of these artificial plants in a 79-foot-long (24-meter) wave tank in MIT’s Parsons Laboratory, which can mimic conditions of natural waves and currents. They subjected the meadow to a variety of conditions, including still water, strong currents, and wave-like sloshing back and forth. Their results validated predictions made earlier using a computerized model of individual plants.

Video captions: Researchers used the 79-foot-long (24-meter) wave tank at MIT, loaded with simulated seagrass plants, to study how seagrass attenuates waves under various conditions. One clip shows the simulated plants exposed to strong waves; another shows them subjected to very low-velocity waves.

The researchers used the physical and numerical models to analyze how the seagrass and waves interact under a variety of conditions of plant density, blade lengths, and water motions. The study describes how the motion of the plants changes with blade stiffness, wave period, and wave amplitude, providing a more precise prediction of wave damping over seagrass meadows. While other research has modeled some of these conditions, the new work more faithfully reproduces real-world conditions and provides a more realistic platform for testing ideas about seagrass restoration or ways of optimizing the beneficial effects of such submerged meadows, they say.
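For context, wave decay across a submerged canopy is often summarized with a simple damping law of the form below. This is a widely used empirical parameterization in coastal engineering, shown only as an illustration, not the model developed in these papers.

```latex
\frac{H(x)}{H_0} \;=\; \frac{1}{1 + \beta x}
```

Here \(H_0\) is the incident wave height, \(x\) the distance into the meadow, and \(\beta\) a damping coefficient that grows with shoot density, blade drag, and the fraction of the water column occupied by the canopy. Halving the wave height over a meadow of length \(L\) corresponds to \(\beta L = 1\).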

To test the validity of the model, the team then compared the predicted effects of seagrass on waves against one specific seagrass meadow off the shore of the Spanish island of Mallorca, in the Mediterranean Sea, which is known to attenuate the force of incoming waves by about 50 percent on average. Using measurements of meadow morphology and wave velocities collected in a previous study led by Professor Eduardo Infantes, currently of Gothenburg University, Lei was able to confirm the predictions made by the model, which analyzed the way the tips of the grass blades and particles suspended in the water both tend to follow circular paths as waves go by, forming circles of motion known as orbitals.

The observations there matched the predictions very well, Lei says: the way wave strength and seagrass motion varied with distance from the edge of the meadow to its interior agreed with the model. “With this model the engineers and practitioners can assess different scenarios for seagrass restoration projects, which is a big deal right now,” he says. That could make a significant difference, because some restoration projects are currently considered too expensive to undertake, whereas a better analysis could show that a smaller area, less expensive to restore, might be capable of providing the desired level of protection. In other areas, the analysis might show that a project is not worth doing at all, because the characteristics of the local waves or currents would limit the grasses’ effectiveness.

The particular seagrass meadow in Mallorca that they studied is known to be very dense and uniform, so one future project is to extend the comparison to other seagrass areas, including those that are patchier or less thickly vegetated, Nepf says, to demonstrate that the model can indeed be useful under a variety of conditions.

By attenuating the waves and thus providing protection against erosion, the seagrass can trap fine sediment on the seabed. This can significantly reduce or prevent runaway growth of algae fed by the nutrients associated with the fine sediment, which in turn causes a depletion of oxygen that can kill off much of the marine life, a process called eutrophication.

Seagrass also has significant potential for sequestering carbon, both through its own biomass and by filtering out fine organic material from the surrounding water, according to Nepf, and this is a focus of her and Lei’s ongoing research. An acre of seagrass can store about three times as much carbon as an acre of rainforest, and Lei says preliminary calculations suggest that globally, seagrass meadows are responsible for more than 10 percent of carbon buried in the ocean, even though they occupy just 0.2 percent of the area.
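Taken together, those two figures imply a striking per-area enrichment. As a back-of-the-envelope calculation using only the numbers quoted above:

```latex
\frac{\text{share of ocean carbon burial}}{\text{share of ocean area}} \approx \frac{0.10}{0.002} = 50
```

That is, on a per-area basis, seagrass meadows would bury carbon at roughly 50 times the ocean-average rate.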

While other researchers have studied the effects of seagrass in steady currents, or in oscillating waves, “they are the first to combine these two types of flows, which are what real plants are typically subjected to. Despite the added complexity, they really sort out the physics and define different flow regimes with different behaviours,” says Frédérick Gosselin, professor of mechanical engineering at Polytechnique Montréal, in Canada, who was not connected to this research.

Gosselin adds, “This line of research is critical. Land developers are quick to fill and dredge wetlands without much thinking about the role these humid environments play.” This study “demonstrates how submerged vegetation has a precisely quantifiable effect on damping incoming waves. This means we can now evaluate exactly how much a meadow protects the coast from erosion. … This information would allow better decisions by our lawmakers.”

The work was funded by the U.S. National Science Foundation.



from MIT News - Oceanography and ocean engineering http://bit.ly/2J9qiOw

jueves, 25 de abril de 2019

Designing ocean ecological systems in the lab

Researchers from MIT have discovered simple rules of assembly of ocean microbiomes that degrade complex polysaccharides in coastal environments. Microbiomes, or microbial communities, are composed of hundreds or thousands of diverse species, making it a challenge to identify the principles that govern their structure and function.

The findings indicate that marine microbiomes can be simplified by grouping species into two types of functional modules. The first type contains polysaccharide specialists that produce the enzymes required to break down the complex sugars. The second type contains species that consume simple metabolic byproducts released by the specialist degraders and are therefore independent of the polysaccharide. This partitioning reveals a simple design for the microbiome: a trophic network in which energy is funneled from degraders to consumers.
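That degrader-to-consumer structure can be sketched schematically. In the toy example below, the species names, the leak fraction, and the numbers are hypothetical; the code only illustrates the two-module bookkeeping, not the study’s model.

```python
# Hypothetical two-module trophic network: polysaccharide degraders release
# simple byproducts that support consumers, so energy flows degraders -> consumers.
modules = {
    "degraders": ["degrader_sp_A", "degrader_sp_B"],  # secrete enzymes, consume the polysaccharide
    "consumers": ["consumer_sp_C", "consumer_sp_D"],  # consume released byproducts only
}

# Assumed fraction of incoming polysaccharide carbon leaked as byproducts.
leak_fraction = 0.4

def partition_energy(total_carbon):
    """Split a pulse of polysaccharide carbon between the two modules."""
    to_degraders = total_carbon * (1.0 - leak_fraction)
    to_consumers = total_carbon * leak_fraction
    return {"degraders": to_degraders, "consumers": to_consumers}

print(partition_energy(100.0))  # {'degraders': 60.0, 'consumers': 40.0}
```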

“Our work reveals fundamental principles of microbial community assembly that can help us make sense of the vast diversity of microbes in the environment,” states Otto X. Cordero, principal investigator on the research and associate professor in the Department of Civil and Environmental Engineering (CEE). 

Cordero’s co-authors on the paper include CEE research affiliates Tim Enke and Manoshi S. Datta, CEE postdoc Julia Schwartzman, and Computational and Systems Biology Program research affiliate Nathan Cermak, as well as researchers from science and technology university ETH Zurich in Switzerland.

The simple trophic organization revealed by this study allowed Cordero and colleagues to predict microbiome species composition based on the profile of energy resources available to the community. 

“The significance of these discoveries is that we have identified simple rules of assembly, which allows us to predict community composition and rationally design ecological systems in the lab,” emphasizes Cordero. 

In order to investigate the modular organization of the microbial communities, the researchers conducted fieldwork with synthetic marine particles made of polysaccharides that are abundant in marine environments, such as chitin, alginate, agarose and carrageenan, as well as combinations of these substrates.

The team immersed the microscopic particles in natural samples of seawater and studied the colonization dynamics of bacteria using genome sequencing. This analysis allowed the researchers to disentangle the effect of polysaccharide composition on microbiome assembly.

“A promising application of this work is to apply these principles in order to design synthetic communities that degrade complex biological materials, such as those found in agricultural waste and animal feed,” says Cordero. 



from MIT News - Oceanography and ocean engineering http://bit.ly/2UTD6j2