
Biz Builder Mike

You can't sail Today's boat on Yesterday's wind – Michael Noel


AI

Mar 23 2023

Visual language maps for robot navigation

Posted by Oier Mees, PhD Student, University of Freiburg, and Andy Zeng, Research Scientist, Robotics at Google

People are excellent navigators of the physical world, due in part to their remarkable ability to build cognitive maps that form the basis of spatial memory — from localizing landmarks at varying ontological levels (like a book on a shelf in the living room) to determining whether a layout permits navigation from point A to point B. Building robots that are proficient at navigation requires an interconnected understanding of (a) vision and natural language (to associate landmarks or follow instructions), and (b) spatial reasoning (to connect a map representing an environment to the true spatial distribution of objects). While there have been many recent advances in training joint visual-language models on Internet-scale data, figuring out how to best connect them to a spatial representation of the physical world that can be used by robots remains an open research question.

To explore this, we collaborated with researchers at the University of Freiburg and Nuremberg to develop Visual Language Maps (VLMaps), a map representation that directly fuses pre-trained visual-language embeddings into a 3D reconstruction of the environment. VLMaps, which is set to appear at ICRA 2023, is a simple approach that allows robots to (1) index visual landmarks in the map using natural language descriptions, (2) employ Code as Policies to navigate to spatial goals, such as “go in between the sofa and TV” or “move three meters to the right of the chair”, and (3) generate open-vocabulary obstacle maps — allowing multiple robots with different morphologies (mobile manipulators vs. drones, for example) to use the same VLMap for path planning. VLMaps can be used out-of-the-box without additional labeled data or model fine-tuning, and outperforms other zero-shot methods by over 17% on challenging object-goal and spatial-goal navigation tasks in Habitat and Matterport3D. We are also releasing the code used for our experiments along with an interactive simulated robot demo.


VLMaps can be built by fusing pre-trained visual-language embeddings into a 3D reconstruction of the environment. At runtime, a robot can query the VLMap to locate visual landmarks given natural language descriptions, or to build open-vocabulary obstacle maps for path planning.

Classic 3D maps with a modern multimodal twist

VLMaps combines the geometric structure of classic 3D reconstructions with the expression of modern visual-language models pre-trained on Internet-scale data. As the robot moves around, VLMaps uses a pre-trained visual-language model to compute dense per-pixel embeddings from posed RGB camera views, and integrates them into a large map-sized 3D tensor aligned with an existing 3D reconstruction of the physical world. This representation allows the system to localize landmarks given their natural language descriptions (such as “a book on a shelf in the living room”) by comparing their text embeddings to all locations in the tensor and finding the closest match. Querying these target locations can be used directly as goal coordinates for language-conditioned navigation, as primitive API function calls for Code as Policies to process spatial goals (e.g., code-writing models interpret “in between” as arithmetic between two locations), or to sequence multiple navigation goals for long-horizon instructions.
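
Before the generated navigation code shown below, here is a rough sketch of that landmark lookup, assuming a CLIP-style text encoder and a precomputed VLMap tensor of per-cell embeddings; the map contents are random placeholders and the function names are hypothetical, not the released VLMaps API.

import numpy as np
import torch
import clip  # OpenAI CLIP; any visual-language model with a text encoder would do

# vlmap: (H, W, D) grid of per-cell visual-language embeddings, already fused
# into the top-down map as described above (random placeholder data here).
H, W, D = 200, 200, 512
vlmap = np.random.randn(H, W, D).astype(np.float32)
vlmap /= np.linalg.norm(vlmap, axis=-1, keepdims=True)

model, _ = clip.load("ViT-B/32", device="cpu")

def localize(query: str):
    """Return the (row, col) map cell whose embedding best matches the query."""
    with torch.no_grad():
        text_emb = model.encode_text(clip.tokenize([query])).float().numpy()[0]
    text_emb /= np.linalg.norm(text_emb)
    scores = vlmap.reshape(-1, D) @ text_emb   # cosine similarity for every cell
    return divmod(int(scores.argmax()), W)     # heatmap argmax -> goal coordinates

goal_cell = localize("a book on a shelf in the living room")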

# move first to the left side of the counter, then move between the sink and the oven, then move back and forth to the sofa and the table twice.
robot.move_to_left('counter')
robot.move_in_between('sink', 'oven')
pos1 = robot.get_pos('sofa')    # query the VLMap for the landmark's map coordinates
pos2 = robot.get_pos('table')
for i in range(2):              # back and forth twice
    robot.move_to(pos1)
    robot.move_to(pos2)

# move 2 meters north of the laptop, then move 3 meters rightward.
robot.move_north('laptop')
robot.face('laptop')
robot.turn(180)
robot.move_forward(2)
robot.turn(90)
robot.move_forward(3)


VLMaps can be used to return the map coordinates of landmarks given natural language descriptions, which can be wrapped as a primitive API function call for Code as Policies to sequence multiple goals for long-horizon navigation instructions.

Results

We evaluate VLMaps on challenging zero-shot object-goal and spatial-goal navigation tasks in Habitat and Matterport3D, without additional training or fine-tuning. The robot is asked to navigate to four subgoals sequentially specified in natural language. We observe that VLMaps significantly outperforms strong baselines (including CoW and LM-Nav) by up to 17% due to its improved visuo-lingual grounding.

Tasks            Number of subgoals in a row     Independent subgoals
                  1      2      3      4
LM-Nav           26      4      1      1         26
CoW              42     15      7      3         36
CLIP Map         33      8      2      0         30
VLMaps (ours)    59     34     22     15         59
GT Map           91     78     71     67         85

The VLMaps approach performs favorably against alternative open-vocabulary baselines on multi-object navigation (success rate, %) and particularly excels on longer-horizon tasks with multiple subgoals.

A key advantage of VLMaps is its ability to understand spatial goals, such as “go in between the sofa and TV” or “move three meters to the right of the chair”. Experiments for long-horizon spatial-goal navigation show an improvement of up to 29%. To gain more insights into the regions in the map that are activated for different language queries, we visualize the heatmaps for the object type “chair”.

The improved visual-language grounding of VLMaps, which produces significantly fewer false positives than competing approaches, enables the robot to navigate zero-shot to landmarks using language descriptions.

Open-vocabulary obstacle maps

A single VLMap of the same environment can also be used to build open-vocabulary obstacle maps for path planning. This is done by taking the union of binary-thresholded detection maps over a list of landmark categories that the robot can or cannot traverse (such as “tables”, “chairs”, “walls”, etc.). This is useful since robots with different morphologies may move around in the same environment differently. For example, “tables” are obstacles for a large mobile robot, but may be traversable for a drone. We observe that using VLMaps to create multiple robot-specific obstacle maps improves navigation efficiency by up to 4% (measured in terms of task success rates weighted by path length) over using a single shared obstacle map for each robot. See the paper for more details.
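
As a rough sketch of how such embodiment-specific obstacle maps might be assembled, assuming per-category detection heatmaps already extracted from the VLMap; the category lists, threshold, and helper names are illustrative placeholders, not the released code.

import numpy as np

# Which landmark categories count as obstacles depends on the robot's body:
# a table blocks a mobile base but may be traversable for a drone.
OBSTACLES = {
    "mobile_robot": ["table", "chair", "sofa", "wall"],
    "drone": ["wall"],
}

def obstacle_map(embodiment, heatmaps, threshold=0.55):
    """Union of binary-thresholded detection maps over the categories that
    this embodiment cannot traverse: 1 = blocked cell, 0 = free cell."""
    h, w = next(iter(heatmaps.values())).shape
    blocked = np.zeros((h, w), dtype=bool)
    for category in OBSTACLES[embodiment]:
        blocked |= heatmaps[category] > threshold
    return blocked.astype(np.uint8)

# heatmaps = {c: localize_heatmap(c) for c in ["table", "chair", "sofa", "wall"]}
# grid = obstacle_map("drone", heatmaps)   # feed to any path planner (A*, RRT, ...)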

Experiments with a mobile robot (LoCoBot) and drone in AI2THOR simulated environments. Left: Top-down view of an environment. Middle columns: Agents’ observations during navigation. Right: Obstacle maps generated for different embodiments with corresponding navigation paths.

Conclusion

VLMaps takes an initial step towards grounding pre-trained visual-language information onto spatial map representations that can be used by robots for navigation. Experiments in simulated and real environments show that VLMaps can enable language-using robots to (i) index landmarks (or spatial locations relative to them) given their natural language descriptions, and (ii) generate open-vocabulary obstacle maps for path planning. Extending VLMaps to handle more dynamic environments (e.g., with moving people) is an interesting avenue for future work.

Open-source release

We have released the code needed to reproduce our experiments and an interactive simulated robot demo on the project website, which also contains additional videos and code to benchmark agents in simulation.

Acknowledgments

We would like to thank the co-authors of this research: Chenguang Huang and Wolfram Burgard.


Visual language maps for robot navigation Republished from Source http://ai.googleblog.com/2023/03/visual-language-maps-for-robot.html via http://feeds.feedburner.com/blogspot/gJZg


Written by Google AI · Categorized: AI · Tagged: AI

Mar 22 2023

Robot caterpillar demonstrates new approach to locomotion for soft robotics

Researchers at North Carolina State University have demonstrated a caterpillar-like soft robot that can move forward, backward and dip under narrow spaces. The caterpillar-bot’s movement is driven by a novel pattern of silver nanowires that use heat to control the way the robot bends, allowing users to steer the robot in either direction.

“A caterpillar’s movement is controlled by local curvature of its body — its body curves differently when it pulls itself forward than it does when it pushes itself backward,” says Yong Zhu, corresponding author of a paper on the work and the Andrew A. Adams Distinguished Professor of Mechanical and Aerospace Engineering at NC State. “We’ve drawn inspiration from the caterpillar’s biomechanics to mimic that local curvature, and use nanowire heaters to control similar curvature and movement in the caterpillar-bot.

“Engineering soft robots that can move in two different directions is a significant challenge in soft robotics,” Zhu says. “The embedded nanowire heaters allow us to control the movement of the robot in two ways. We can control which sections of the robot bend by controlling the pattern of heating in the soft robot. And we can control the extent to which those sections bend by controlling the amount of heat being applied.”

The caterpillar-bot consists of two layers of polymer, which respond differently when exposed to heat. The bottom layer shrinks, or contracts, when exposed to heat. The top layer expands when exposed to heat. A pattern of silver nanowires is embedded in the expanding layer of polymer. The pattern includes multiple lead points where researchers can apply an electric current. The researchers can control which sections of the nanowire pattern heat up by applying an electric current to different lead points, and can control the amount of heat by applying more or less current.

“We demonstrated that the caterpillar-bot is capable of pulling itself forward and pushing itself backward,” says Shuang Wu, first author of the paper and a postdoctoral researcher at NC State. “In general, the more current we applied, the faster it would move in either direction. However, we found that there was an optimal cycle, which gave the polymer time to cool — effectively allowing the ‘muscle’ to relax before contracting again. If we tried to cycle the caterpillar-bot too quickly, the body did not have time to ‘relax’ before contracting again, which impaired its movement.”
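
As a purely illustrative sketch of that actuation cycle, with a stub in place of real drive electronics; the lead-point layout, currents, and timings are placeholders, not values from the paper.

import time

def set_current(channel, milliamps):
    """Stub for the real current driver (hypothetical hardware interface)."""
    print(f"lead point {channel}: {milliamps:.0f} mA")

def actuate(channel, milliamps, on_s, off_s, cycles):
    """Pulse current through one nanowire segment, leaving time for the polymer
    to cool ('relax') before the next contraction, as described above."""
    for _ in range(cycles):
        set_current(channel, milliamps)   # heating: the top layer expands, the body curves
        time.sleep(on_s)
        set_current(channel, 0.0)
        time.sleep(off_s)                 # cooling phase; cycling too fast impairs movement

# Placeholder gait: heat one segment in a slow duty cycle.
actuate(channel=2, milliamps=80, on_s=2.0, off_s=4.0, cycles=3)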

The researchers also demonstrated that the caterpillar-bot’s movement could be controlled to the point where users were able to steer it under a very low gap — similar to guiding the robot to slip under a door. In essence, the researchers could control both forward and backward motion as well as how high the robot bent upwards at any point in that process.

“This approach to driving motion in a soft robot is highly energy efficient, and we’re interested in exploring ways that we could make this process even more efficient,” Zhu says. “Additional next steps include integrating this approach to soft robot locomotion with sensors or other technologies for use in various applications — such as search-and-rescue devices.”

The work was done with support from the National Science Foundation, under grants 2122841, 2005374 and 2126072; and from the National Institutes of Health, under grant number 1R01HD108473.


Robot caterpillar demonstrates new approach to locomotion for soft robotics Republished from Source https://www.sciencedaily.com/releases/2023/03/230322190913.htm via https://www.sciencedaily.com/rss/computers_math/artificial_intelligence.xml


Written by bizbuildermike · Categorized: AI · Tagged: AI

Mar 22 2023

Biodegradable artificial muscles: Going green in the field of soft robotics

Artificial muscles are an advancing technology that could one day enable robots to function like living organisms. Such muscles open up new possibilities for how robots can shape the world around us, from assistive wearable devices that can redefine our physical abilities in old age to rescue robots that can navigate rubble in search of the missing. But just because artificial muscles can have a strong societal impact during use doesn't mean they have to leave a strong environmental impact after use.

The topic of sustainability in soft robotics has now been brought into focus by an international team of researchers from the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart, Germany, the Johannes Kepler University (JKU) in Linz, Austria, and the University of Colorado Boulder (CU Boulder) in the USA. The scientists collaborated to design a fully biodegradable, high-performance artificial muscle based on gelatin, oil, and bioplastics. They show the potential of this biodegradable technology by using it to animate a robotic gripper, which could be especially useful in single-use deployments such as waste collection. At the end of life, these artificial muscles can be disposed of in municipal compost bins; under monitored conditions, they fully biodegrade within six months.

“We see an urgent need for sustainable materials in the accelerating field of soft robotics. Biodegradable parts could offer a sustainable solution especially for single-use applications, like for medical operations, search-and-rescue missions, and manipulation of hazardous substances. Instead of accumulating in landfills at the end of product life, the robots of the future could become compost for future plant growth,” says Ellen Rumley, a visiting scientist from CU Boulder working in the Robotic Materials Department at MPI-IS. Rumley is co-first author of the paper “Biodegradable electrohydraulic actuators for sustainable soft robots” which will be published in Science Advances on March 22, 2023.

Specifically, the team of researchers built an electrically driven artificial muscle called HASEL. In essence, HASELs are oil-filled plastic pouches that are partially covered by a pair of electrical conductors called electrodes. Applying a high voltage across the electrode pair causes opposing charges to build on them, generating a force between them that pushes oil to an electrode-free region of the pouch. This oil migration causes the pouch to contract, much like a real muscle. The key requirement for HASELs to deform is that the materials making up the plastic pouch and oil are electrical insulators, which can sustain the high electrical stresses generated by the charged electrodes.
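
For a sense of the forces involved, the attraction between the charged electrodes can be estimated with the textbook Maxwell (electrostatic) pressure; this back-of-the-envelope estimate is not from the article, and the voltage, permittivity, and gap values below are placeholders.

EPS_0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 3.0            # relative permittivity of the dielectric stack (placeholder)
voltage = 6000.0       # applied voltage, V (placeholder)
gap = 40e-6            # dielectric film + oil gap between electrodes, m (placeholder)

e_field = voltage / gap                           # electric field, V/m
pressure = 0.5 * EPS_0 * eps_r * e_field ** 2     # Maxwell pressure, Pa
print(f"electrostatic pressure ~ {pressure / 1e3:.0f} kPa")   # ~300 kPa for these numbers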

One of the challenges for this project was to develop a conductive, soft, and fully biodegradable electrode. Researchers at Johannes Kepler University created a recipe based on a mixture of biopolymer gelatin and salts that can be directly cast onto HASEL actuators. “It was important for us to make electrodes suitable for these high-performance applications, but with readily available components and an accessible fabrication strategy. Since our presented formulation can be easily integrated in various types of electrically driven systems, it serves as a building block for future biodegradable applications,” states David Preninger, co-first author for this project and a scientist at the Soft Matter Physics Division at JKU.

The next step was finding suitable biodegradable plastics. Engineers working with this class of materials are mainly concerned with properties like degradation rate or mechanical strength, not with electrical insulation, which is a requirement for HASELs that operate at a few thousand volts. Nonetheless, some bioplastics showed good material compatibility with gelatin electrodes and sufficient electrical insulation. HASELs made from one specific material combination were even able to withstand 100,000 actuation cycles at several thousand volts without signs of electrical failure or loss in performance. These biodegradable artificial muscles are electromechanically competitive with their non-biodegradable counterparts, an exciting result for promoting sustainability in artificial muscle technology.

“By showing the outstanding performance of this new materials system, we are giving an incentive for the robotics community to consider biodegradable materials as a viable material option for building robots,” Ellen Rumley continues. “The fact that we achieved such great results with bio-plastics hopefully also motivates other material scientists to create new materials with optimized electrical performance in mind.”

With green technology becoming ever more present, the team’s research project is an important step towards a paradigm shift in soft robotics. Using biodegradable materials for building artificial muscles is just one step towards paving a future for sustainable robotic technology.


Biodegradable artificial muscles: Going green in the field of soft robotics Republished from Source https://www.sciencedaily.com/releases/2023/03/230322190902.htm via https://www.sciencedaily.com/rss/computers_math/artificial_intelligence.xml


Written by bizbuildermike · Categorized: AI · Tagged: AI

Mar 21 2023

Learning to grow machine-learning models

It’s no secret that OpenAI’s ChatGPT has some incredible capabilities — for instance, the chatbot can write poetry that resembles Shakespearean sonnets or debug code for a computer program. These abilities are made possible by the massive machine-learning model that ChatGPT is built upon. Researchers have found that when these types of models become large enough, extraordinary capabilities emerge.

But bigger models also require more time and money to train. The training process involves showing hundreds of billions of examples to a model. Gathering so much data is an involved process in itself. Then come the monetary and environmental costs of running many powerful computers for days or weeks to train a model that may have billions of parameters. 

“It’s been estimated that training models at the scale of what ChatGPT is hypothesized to run on could take millions of dollars, just for a single training run. Can we improve the efficiency of these training methods, so we can still get good models in less time and for less money? We propose to do this by leveraging smaller language models that have previously been trained,” says Yoon Kim, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Rather than discarding a previous version of a model, Kim and his collaborators use it as the building blocks for a new model. Using machine learning, their method learns to “grow” a larger model from a smaller model in a way that encodes knowledge the smaller model has already gained. This enables faster training of the larger model.

Their technique saves about 50 percent of the computational cost required to train a large model, compared to methods that train a new model from scratch. Plus, the models trained using the MIT method performed as well as, or better than, models trained with other techniques that also use smaller models to enable faster training of larger models.

Reducing the time it takes to train huge models could help researchers make advancements faster with less expense, while also reducing the carbon emissions generated during the training process. It could also enable smaller research groups to work with these massive models, potentially opening the door to many new advances.

“As we look to democratize these types of technologies, making training faster and less expensive will become more important,” says Kim, senior author of a paper on this technique.

Kim and his graduate student Lucas Torroba Hennigen wrote the paper with lead author Peihao Wang, a graduate student at the University of Texas at Austin, as well as others at the MIT-IBM Watson AI Lab and Columbia University. The research will be presented at the International Conference on Learning Representations.

The bigger the better

Large language models like GPT-3, which is at the core of ChatGPT, are built using a neural network architecture called a transformer. A neural network, loosely based on the human brain, is composed of layers of interconnected nodes, or “neurons.” Each neuron contains parameters, which are variables learned during the training process that the neuron uses to process data.

Transformer architectures are unique because, as these types of neural network models get bigger, they achieve much better results.

“This has led to an arms race of companies trying to train larger and larger transformers on larger and larger datasets. More so than other architectures, it seems that transformer networks get much better with scaling. We’re just not exactly sure why this is the case,” Kim says.

These models often have hundreds of millions or billions of learnable parameters. Training all these parameters from scratch is expensive, so researchers seek to accelerate the process.

One effective technique is known as model growth. Using the model growth method, researchers can increase the size of a transformer by copying neurons, or even entire layers of a previous version of the network, then stacking them on top. They can make a network wider by adding new neurons to a layer or make it deeper by adding additional layers of neurons.
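
As a generic illustration of model growth (not the authors' LiGO method, which is sketched further below), widening a layer by copying neurons and deepening a network by stacking a copied layer might look like this in PyTorch; dimensions and module names are placeholders.

import copy
import torch
import torch.nn as nn

small = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

def widen(layer, new_out):
    """Grow a layer's output width by copying existing neurons (rows of its weight)."""
    idx = torch.arange(new_out) % layer.out_features       # repeat existing units
    grown = nn.Linear(layer.in_features, new_out)
    with torch.no_grad():
        grown.weight.copy_(layer.weight[idx])
        grown.bias.copy_(layer.bias[idx])
    return grown                       # downstream layers must be widened to match

wider_first = widen(small[0], 128)                          # wider network
deeper = nn.Sequential(*small, copy.deepcopy(small[-1]))    # deeper network (stack a copied layer)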

In contrast to previous approaches for model growth, parameters associated with the new neurons in the expanded transformer are not just copies of the smaller network’s parameters, Kim explains. Rather, they are learned combinations of the parameters of the smaller model.

Learning to grow

Kim and his collaborators use machine learning to learn a linear mapping of the parameters of the smaller model. This linear map is a mathematical operation that transforms a set of input values, in this case the smaller model’s parameters, to a set of output values, in this case the parameters of the larger model.

Their method, which they call a learned Linear Growth Operator (LiGO), learns to expand the width and depth of a larger network from the parameters of a smaller network in a data-driven way.

But the smaller model may actually be quite large — perhaps it has a hundred million parameters — and researchers might want to make a model with a billion parameters. So the LiGO technique breaks the linear map into smaller pieces that a machine-learning algorithm can handle.

LiGO also expands width and depth simultaneously, which makes it more efficient than other methods. A user can tune how wide and deep they want the larger model to be when they input the smaller model and its parameters, Kim explains.
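
A very rough sketch of that idea, expanding a stack of layer weights with a learned linear map that is factored into separate width and depth operators; this is only a conceptual illustration under assumed shapes, not the exact LiGO parameterization from the paper.

import torch
import torch.nn as nn

d_small, d_large = 64, 128     # hidden widths (placeholders)
n_small, n_large = 4, 8        # layer counts (placeholders)

# Learnable width operator A and depth operator D; training them end to end is
# what makes the growth "learned" rather than a fixed copy.
A = nn.Parameter(0.02 * torch.randn(d_large, d_small))
D = nn.Parameter(0.02 * torch.randn(n_large, n_small))

def grow(small_layers):
    """small_layers: list of n_small weight matrices, each (d_small x d_small).
    Every large layer is a learned combination of width-expanded small layers."""
    expanded = [A @ W @ A.t() for W in small_layers]                 # width growth
    return [sum(D[i, j] * expanded[j] for j in range(n_small))       # depth growth
            for i in range(n_large)]

small_layers = [torch.randn(d_small, d_small) for _ in range(n_small)]
large_init = grow(small_layers)   # used to initialize the bigger model before training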

When they compared their technique to the process of training a new model from scratch, as well as to model-growth methods, it was faster than all the baselines. Their method saves about 50 percent of the computational costs required to train both vision and language models, while often improving performance.

The researchers also found they could use LiGO to accelerate transformer training even when they didn’t have access to a smaller, pretrained model.

“I was surprised by how much better all the methods, including ours, did compared to the random-initialization, train-from-scratch baselines,” Kim says.

In the future, Kim and his collaborators are looking forward to applying LiGO to even larger models.

The work was funded, in part, by the MIT-IBM Watson AI Lab, Amazon, the IBM Research AI Hardware Center, Center for Computational Innovation at Rensselaer Polytechnic Institute, and the U.S. Army Research Office.


Learning to grow machine-learning models Republished from Source https://news.mit.edu/2023/new-technique-machine-learning-models-0322 via https://news.mit.edu/rss/topic/artificial-intelligence2


Written by Adam Zewe MIT News Office · Categorized: AI, MIT AI · Tagged: AI, MIT AI

Mar 20 2023

Detailed images from space offer clearer picture of drought effects on plants

“MIT is a place where dreams come true,” says César Terrer, an assistant professor in the Department of Civil and Environmental Engineering. Here at MIT, Terrer says he’s given the resources needed to explore ideas he finds most exciting, and at the top of his list is climate science. In particular, he is interested in plant-soil interactions, and how the two can mitigate impacts of climate change. In 2022, Terrer received seed grant funding from the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) to produce drought monitoring systems for farmers. The project is leveraging a new generation of remote sensing devices to provide high-resolution plant water stress at regional to global scales.

Growing up in Granada, Spain, Terrer always had an aptitude and passion for science. He studied environmental science at the University of Murcia, where he interned in the Department of Ecology. Using computational analysis tools, he worked on modeling species distribution in response to human development. Early on in his undergraduate experience, Terrer says he regarded his professors as “superheroes” with a kind of scholarly prowess. He knew he wanted to follow in their footsteps by one day working as a faculty member in academia. Of course, there would be many steps along the way before achieving that dream. 

Upon completing his undergraduate studies, Terrer set his sights on exciting and adventurous research roles. He thought perhaps he would conduct field work in the Amazon, engaging with native communities. But when the opportunity arose to work in Australia on a state-of-the-art climate change experiment that simulates future levels of carbon dioxide, he headed south to study how plants react to CO2 in a biome of native Australian eucalyptus trees. It was during this experience that Terrer started to take a keen interest in the carbon cycle and the capacity of ecosystems to buffer rising levels of CO2 caused by human activity.

Around 2014, he began to delve deeper into the carbon cycle as he began his doctoral studies at Imperial College London. The primary question Terrer sought to answer during his PhD was “will plants be able to absorb predicted future levels of CO2 in the atmosphere?” To answer the question, Terrer became an early adopter of artificial intelligence, machine learning, and remote sensing to analyze data from real-life, global climate change experiments. His findings from these “ground truth” values and observations resulted in a paper in the journal Science. In it, he claimed that climate models most likely overestimated how much carbon plants will be able to absorb by the end of the century, by a factor of three. 

After postdoctoral positions at Stanford University and the Universitat Autonoma de Barcelona, followed by a prestigious Lawrence Fellowship, Terrer says he had “too many ideas and not enough time to accomplish all those ideas.” He knew it was time to lead his own group. Not long after applying for faculty positions, he landed at MIT. 

New ways to monitor drought

Terrer is employing similar methods to those he used during his PhD to analyze data from all over the world for his J-WAFS project. He and postdoc Wenzhe Jiao collect data from remote sensing satellites and field experiments and use machine learning to come up with new ways to monitor drought. Terrer says Jiao is a “remote sensing wizard,” who fuses data from different satellite products to understand the water cycle. With Jiao’s hydrology expertise and Terrer’s knowledge of plants, soil, and the carbon cycle, the duo is a formidable team to tackle this project.

According to the U.N. World Meteorological Organization, the number and duration of droughts has increased by 29 percent since 2000, as compared to the two previous decades. From the Horn of Africa to the Western United States, drought is devastating vegetation and severely stressing water supplies, compromising food production and spiking food insecurity. Drought monitoring can offer fundamental information on drought location, frequency, and severity, but assessing the impact of drought on vegetation is extremely challenging. This is because plants’ sensitivity to water deficits varies across species and ecosystems. 

Terrer and Jiao are able to obtain a clearer picture of how drought is affecting plants by employing the latest generation of remote sensing observations, which offer images of the planet with incredible spatial and temporal resolution. Satellite products such as Sentinel, Landsat, and Planet can provide daily images from space with such high resolution that individual trees can be discerned. Along with the images and datasets from satellites, the team is using ground-based observations from meteorological data. They are also using the MIT SuperCloud at MIT Lincoln Laboratory to process and analyze all of the data sets. The J-WAFS project is among one of the first to leverage high-resolution data to quantitatively measure plant drought impacts in the United States with the hopes of expanding to a global assessment in the future.

Assisting farmers and resource managers 

Every week, the U.S. Drought Monitor provides a map of drought conditions in the United States. The map is coarse in resolution and is more of a drought recap or summary, unable to predict future drought scenarios. The lack of a comprehensive spatiotemporal evaluation of historic and future drought impacts on global vegetation productivity is detrimental to farmers both in the United States and worldwide.

Terrer and Jiao plan to generate metrics for plant water stress at an unprecedented resolution of 10-30 meters. This means that they will be able to provide drought monitoring maps at the scale of a typical U.S. farm, giving farmers more precise, useful data every one to two days. The team will use the information from the satellites to monitor plant growth and soil moisture, as well as the time lag of plant growth response to soil moisture. In this way, Terrer and Jiao say they will eventually be able to create a kind of “plant water stress forecast” that may be able to predict adverse impacts of drought four weeks in advance. “According to the current soil moisture and lagged response time, we hope to predict plant water stress in the future,” says Jiao. 
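
As a minimal sketch of that lag-based idea, one could estimate the delay between soil moisture and a vegetation greenness index with a cross-correlation and then use lagged moisture as a crude stress predictor; the data here are synthetic and the approach is an illustration, not the team's actual pipeline.

import numpy as np

rng = np.random.default_rng(0)
days = 365
# Synthetic daily series for one 10-30 m pixel: vegetation greenness (NDVI-like)
# responds to soil moisture with roughly a two-week delay.
soil_moisture = np.sin(np.arange(days) / 20) + rng.normal(0, 0.1, days)
greenness = np.roll(soil_moisture, 14) + rng.normal(0, 0.05, days)

def best_lag(moisture, veg, max_lag=60):
    """Lag (days) at which soil moisture correlates most strongly with greenness."""
    corrs = [np.corrcoef(moisture[:-lag], veg[lag:])[0, 1] for lag in range(1, max_lag)]
    return int(np.argmax(corrs)) + 1

lag = best_lag(soil_moisture, greenness)
# With the lag known, today's soil-moisture anomaly gives a crude look-ahead on stress.
stress_forecast = -(soil_moisture[-1] - soil_moisture.mean())   # drier than normal -> higher stress
print(f"estimated response lag: {lag} days; stress index forecast: {stress_forecast:.2f}")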

The expected outcomes of this project will give farmers, land and water resource managers, and decision-makers more accurate data at the farm-specific level, allowing for better drought preparation, mitigation, and adaptation. “We expect to make our data open-access online, after we finish the project, so that farmers and other stakeholders can use the maps as tools,” says Jiao. 

Terrer adds that the project “has the potential to help us better understand the future states of climate systems, and also identify the regional hot spots more likely to experience water crises at the national, state, local, and tribal government scales.” He also expects the project will enhance our understanding of global carbon-water-energy cycle responses to drought, with applications in determining climate change impacts on natural ecosystems as a whole.


Detailed images from space offer clearer picture of drought effects on plants Republished from Source https://news.mit.edu/2023/detailed-images-space-offer-clearer-picture-drought-effects-plants-0320 via https://news.mit.edu/rss/topic/artificial-intelligence2


Written by Carolyn Blais Abdul Latif Jameel Water and Food Systems Lab · Categorized: AI, MIT AI · Tagged: AI, MIT AI
