Imitation of Intelligence: Exploring Artificial Intelligence!

What is the difference between “calculate” and “compute”?


I assure you, we are not going to dwell on such quintessential computing-world terms, which might bore some of us, as the question may have given that impression 😀

But it is worth a moment of curiosity, because it touches the crux of what we are going to go through.

 

 

So, calculation involves an arithmetic process, while computation covers the non-arithmetic steps of an algorithm: everything that leads up to the calculation.

You can see where I am going with this, right? We can visualize every stage of data processing, from data collection, cleansing and processing to transforming the data through mathematical operations that map it into something more meaningful, i.e. “insight”. The intelligence behind such meaningful transformation used to be human intervention; with the new digital trend, it can now be “artificial”.

Getting to know …

Artificial Intelligence in industry will change everything about the way we produce, manufacture and deliver. Cognitive computing, machine learning, natural language processing: different aspects have emerged as the technology has progressed in recent years, but they all encapsulate the idea that machines could one day be taught to learn how to adapt by themselves, rather than having to be spoon-fed every instruction for every eventuality. There are certain important emerging digital trends we can track, considering how the technology and the future are converging very fast. Years ago the industrial revolution irreversibly remolded society, and another revolution is underway with potentially even further-reaching consequences. These digital trends are all potentially disruptive unless we plan ahead for the impact and change that is coming. The likely benefits are more agility, smarter business processes, and better productivity, achieved by converging focus and effort on the right things.

Goals of Artificial Intelligence

Artificial intelligence (AI) has become ubiquitous in business, in every industry where decision making is being fundamentally transformed by machine brains. The need for faster and smarter decisions, and the management of big data that can make the difference, is what is driving this trend. The convergence of big data with AI is inevitable, as the automation of smarter decision-making is the next evolution of big data. While adapting to this change, some will inevitably prosper and some will fail. Those that succeed are likely to be those that can see beyond the hype and understand how this technology can add real value and drive positive change.

The best way to look at AI is as automating things that have already been worked on and solved logically. This lets us apply existing problem-solving logic effectively and smartly using artificial brains, freeing human brains and effort to focus on problems that still need more attention. A world where machines and devices all communicate with each other to get the work done (IoT), leaving us free to relax and enjoy life, can be imagined through effective use of AI.

Let’s wander around this digital trend and explore how it can be integrated for a better life 🙂

#bigdata is becoming the norm for many organizations, which use it to profile people, analyze their behavioral patterns and inform their decision-making processes, whether for basic day-to-day life choices or strategic business decisions.

#ArtificialIntelligence or #AI is stepping out of the world of science-fiction and into real life, providing the ‘thinking’ ability behind virtual personal assistants, automated robots, and smart cars.

#MachineLearning algorithms are providing the intelligence for discovering patterns in huge amounts of data that traditional data analysis could not hope to find, helping to detect fraud and diagnose diseases.

I tried interacting with Google Home, an AI application developed by Google, and asked it some questions related to this trend. The following is the conversation.

Me: What is Data Analytics?
Google Home: Data Analytics refers to qualitative and quantitative techniques and processes used to enhance productivity and business gain. Data is extracted and categorized to identify and analyze behavioral data and patterns, and techniques vary according to organizational requirements.

“Ok Google”

Me: What is Machine Learning?

Google Home: Machine learning is a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can change when exposed to new data. … Both systems search through data to look for patterns.

Me: What is Artificial Intelligence?
Google Home: the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Well, that’s cool.

Artificial Intelligence is often classified into one of two fundamental groups: applied and general.
Applied AI is more common – systems designed to intelligently trade stocks and shares, or to maneuver an autonomous vehicle, fall into this category.
Generalized AI – systems or devices which can, in theory, handle any task – is less common, but this is where some of the most exciting advancements are happening today. It is also the area that has led to the development of Machine Learning. Often referred to as a subset of AI, it is really more accurate to think of Machine Learning as the current state of the art.

The relation between Artificial Intelligence and Machine Learning:

Artificial Intelligence, human intelligence exhibited by machines, is the broader concept of machines being able to perform tasks that imitate human intelligence, i.e. artificially.
Machine Learning, one of many approaches to achieving Artificial Intelligence, is an application of AI built around the idea that we should let machines learn for themselves, given access to information.

Deep Learning has enabled many practical applications of Machine Learning, and in turn of the overall field of AI. It breaks down tasks in ways that make all kinds of machine assistance seem possible, even likely.

Concept evolution!

As technology, and our understanding of how the human mind works, has progressed, our concept of what constitutes AI has changed. Rather than increasingly complex calculations, work in the field of AI concentrated on imitating human decision-making processes and carrying out tasks in ever more human ways. Once these innovations were in place, engineers realized that rather than spoon-feeding computers and machines, it would be far more efficient to code them to think and learn like a human brain, and to provide the internet as a learning platform giving them access to all of the information in the world.

To make computers think and understand the world the way we do, while retaining the innate advantages they hold over us such as speed, accuracy, and lack of bias, the development of neural networks played the key role.


The next advancement goes a step further: it hides the complexity of AI concepts and the algorithmic journey of ML behind a platform on which an AI application can be built with simple logic, freeing developers to focus on the AI problem they want to solve.

It is good to see some leaders in the industry taking an interest in this and making complex technologies such as AI and ML available as simple platforms for creating voice/text assistants that address this side of data science:

Google api.ai
Amazon Alexa
Facebook wit.ai

And there are many more in the market. Such initiatives are always appreciated.

About Google API.AI – Understand Google api.ai and build AI Assistant


Looking at the other side of this …

There are concerns that this technology will lead to widespread unemployment. That debate is beyond the scope of this discussion, but it does touch on a point we should consider: employees are often a business’s biggest expense, but does that mean it is sensible to think of AI primarily as a means of cutting HR costs?

I don’t think so.

Think about it!

The fully autonomous, AI-powered, human-free industrial operation seems far from becoming reality; human employees working alongside AI machines is likely to be the way of things. How can an intelligence developed by humans REPLACE a human? Surely it can replace the repetitive, mechanizable efforts of a human in places where artificial intelligence can work. So if you are looking to generate value in the near future, thinking about ways to empower humans with technology, rather than replace them, is likely to be more productive. In doing these things we free people to put all of their creativity, passion, and imagination into thinking about the bigger opportunities ahead of us.

Trends are only disruptive if we are unprepared to factor them into our strategy. How trends impact our workforce, customers, market, services, and in turn our lives should be carefully pondered. And perhaps most importantly, a business needs a clear use case and a genuine understanding of how, and why, it can gain value from the technology. With anything new and exuberant in business, there is often a race to be involved, driven primarily by a fear of being left behind. Scrambling to automate and smarten an enterprise without a clear outlook on what you hope to achieve is a misdirection of intelligence.

As Mark Zuckerberg said, “A frustration I have is that a lot of people increasingly seem to equate an advertising business model with somehow being out of alignment with your customers, … I think it’s the most ridiculous concept. What, you think because you’re paying Apple that you’re somehow in alignment with them? If you were in alignment with them, then they’d make their products a lot cheaper!”

Another frustration we should feel is that we increasingly seem to scatter our effort across various technology trends while being out of alignment with their use and their impact on our lives; I think that is an even more ridiculous situation. To be productive, effort needs to be meticulous and pointed in the proper direction, and AI can help find this direction quickly and easily. If we were in alignment with the constructive use and right influence of technology trends, then they would make our lives easier and happier!

Let’s embrace the change and explore integrity!

Image credits: Google




Creating Custom Origin for Streamsets

Streamsets Data Collector:

StreamSets Data Collector is a lightweight and powerful engine that streams data in real time. It allows you to build continuous data pipelines, each of which consumes record-oriented data from a single origin, optionally operates on those records in one or more processors and writes data to one or more destinations.

Streamsets Origin Stage:

To define the flow of data for Data Collector, you configure a pipeline. A pipeline consists of stages that represent the origin and destination of the pipeline, plus any additional processing that you want to perform.

An origin stage represents the source for the pipeline.

For example, consider this pipeline, based on the SDC taxi data tutorial (https://streamsets.com/documentation/datacollector/latest/help/#Tutorial/Overview.html), which uses the Directory origin, four processors, and the Hadoop File System destination:

 

[Image: example pipeline]

 

StreamSets comes bundled with many origin stage components to connect to almost all commonly used data sources, and if you don’t find one for your source system, don’t worry: the StreamSets APIs are there to help you create a customized origin stage for your system.

This blog explains how to get started writing your own custom StreamSets origin stage to stream records from Amazon SQS (Simple Queue Service).

 Requirements: 

  • Java installed
  • IDE (Eclipse/IntelliJ) set up
  • StreamSets Data Collector

Creating and building the origin template

Follow the StreamSets Data Collector documentation to download, install and run StreamSets Data Collector.

You will also need to download the source for the Data Collector and its API. Just make sure that you have matching versions for the runtime and source, so you might find it easier to download tarballs from the relevant GitHub release pages rather than using git clone:
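For instance, a hypothetical sketch of fetching and unpacking matching tarballs from the streamsets/datacollector-api and streamsets/datacollector GitHub repositories (replace {version} with the release tag that matches your Data Collector runtime; the final mv just gives the directories the names used in the build step below):

$ wget -O datacollector-api-{version}.tar.gz https://github.com/streamsets/datacollector-api/archive/{version}.tar.gz
$ wget -O datacollector-{version}.tar.gz https://github.com/streamsets/datacollector/archive/{version}.tar.gz
$ tar xvfz datacollector-api-{version}.tar.gz && mv datacollector-api-{version} datacollector-api
$ tar xvfz datacollector-{version}.tar.gz && mv datacollector-{version} datacollector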

Build both the Data Collector and its API:

$ cd datacollector-api
$ mvn clean install -DskipTests
...output omitted...
$ cd ../datacollector
$ mvn clean install -DskipTests
...output omitted...

Maven puts the library JARs in its repository, so they are available when we build our custom origin.

Create Skeleton Project:

Now create a new custom stage project using the Maven archetype:

$ mvn archetype:generate -DarchetypeGroupId=com.streamsets -DarchetypeArtifactId=streamsets-datacollector-stage-lib-tutorial -DarchetypeVersion={version} -DinteractiveMode=true

The above command uses the streamsets-datacollector-stage-lib-tutorial Maven archetype to create the skeleton project; this is the easiest way to get started developing your own stages.

Provide values for the properties groupId, artifactId, version and package.
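For example, values along these lines work (the groupId and package are hypothetical; the artifactId and version below match the example_stage directory and tarball names used in the later steps):

groupId: com.example
artifactId: example_stage
version: 1.0-SNAPSHOT
package: com.example.stage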

Maven generates a template project from the archetype in a directory with the artifactId you provided as its name. As you can see, there is template code for an origin, a processor and a destination:

 

[Image: generated project structure]

 

Origin template classes: 

In the above figure, the following are the important classes under the origin stage:

  • Groups.java: Holds the labels for the configuration tabs in the Data Collector UI
  • SampleDSource.java: Contains the stage and its configuration definitions, and assigns those configurations to their respective groups
  • SampleSource.java: The place where the actual logic to read data from the source is written

Basic custom origin stage

Now you can build the template:

$ cd example_stage
$ mvn clean package -DskipTests

Extract the tarball to SDC’s user-libs directory, restart SDC, and you should see the sample stages in the stage library:

$ cd ~/streamsets-datacollector-{version}/user-libs/ 
$ tar xvfz {new project root dir}/target/example_stage-1.0-SNAPSHOT.tar.gz
x example_stage/lib/example_stage-1.0-SNAPSHOT.jar

Restart the Data Collector and you will be able to see the sample origin in the stage library panel:

 

[Image: sample stages in the stage library panel]

Understanding the Origin Template Code
Let’s walk through the template code, starting with Groups.java.

Groups.java

The Groups enumeration holds the label for the configuration tab. Replace the label with one for AWS SQS:

@GenerateResourceBundle
public enum Groups implements Label {
  SQS("AWS SQS"),
  ;
  private final String label;
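The rest of the enum can stay as the archetype generated it; for completeness, it looks roughly like this:

  Groups(String label) {
    this.label = label;
  }

  /** {@inheritDoc} */
  @Override
  public String getLabel() {
    return this.label;
  }
}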

SampleDSource.java

Stage and its configuration definitions

Inside SampleDSource.java, define the stage and its configurations and assign those configurations to their respective groups. In our case we require the AWS credentials, the SQS endpoint and the queue name in order to retrieve messages from SQS.

@StageDef(
    version = 1,
    label = "SQS Origin",
    description = "",
    icon = "default.png",
    execution = ExecutionMode.STANDALONE,
    recordsByRef = true,
    onlineHelpRefUrl = ""
)
@ConfigGroups(value = Groups.class)
@GenerateResourceBundle
public class SampleDSource extends SampleSource {

  @ConfigDef(
          required = true,
          type = ConfigDef.Type.STRING,
          defaultValue = "",
          label = "Access Key",
          displayPosition = 10,
          group = "SQS"
  )
  public String access_key;

  @ConfigDef(
          required = true,
          type = ConfigDef.Type.STRING,
          defaultValue = "",
          label = "Secrete Key",
          displayPosition = 10,
          group = "SQS"
  )
  public String secrete_key;

  @ConfigDef(
      required = true,
      type = ConfigDef.Type.STRING,
      defaultValue = "",
      label = "Name",
      displayPosition = 10,
      group = "SQS"
  )
  public String queue_name;

  @ConfigDef(
          required = true,
          type = ConfigDef.Type.STRING,
          defaultValue = "",
          label = "End Point",
          displayPosition = 10,
          group = "SQS"
  )
  public String end_point;

  /** Delete message once read from Queue */
  @ConfigDef(
          required = true,
          type = ConfigDef.Type.BOOLEAN,
          defaultValue = "",
          label = "Delete Message",
          displayPosition = 10,
          group = "SQS"
  )
  public Boolean delete_flag;


  /** {@inheritDoc} */
  @Override
  public String getEndPoint() {
    return end_point;
  }

  /** {@inheritDoc} */
  @Override
  public String getQueueName() {
    return queue_name;
  }


  /** {@inheritDoc} */
  @Override
  public String getAccessKey() {
    return access_key;
  }

  /** {@inheritDoc} */
  @Override
  public String getSecreteKey() {
    return secrete_key;
  }

  /** {@inheritDoc} */
  @Override
  public Boolean getDeleteFlag() {
    return delete_flag;
  }
}

SampleSource.java

Read the configuration and implement the actual logic to read messages from the origin.

The source extends BaseSource from the StreamSets API:

public abstract class SampleSource extends BaseSource {

An abstract method allows the source to get configuration data from its subclass:

The SampleSource class uses the SampleDSource subclass to get access to the UI configuration. Replace the getConfig() method with the following methods:

/**
 * Gives access to the UI configuration of the stage provided by the {@link SampleDSource} class.
 */
public abstract String getEndPoint();
public abstract String getQueueName();
public abstract String getAccessKey();
public abstract String getSecreteKey();
public abstract Boolean getDeleteFlag();

Validate Pipeline Configuration

SDC calls the init() method when validating and running a pipeline. The sample shows how to report configuration errors:

@Override
protected List<ConfigIssue> init() {
    // Validate configuration values and open any required resources.
    List<ConfigIssue> issues = super.init();

    if (getEndPoint().isEmpty() || getQueueName().isEmpty() || getAccessKey().isEmpty() || getSecreteKey().isEmpty()) {
        issues.add(
                getContext().createConfigIssue(
                        Groups.SQS.name(), "config", Errors.SAMPLE_00, "Provide required parameters."
                )
        );
    }

    // If issues is not empty, the UI will inform the user of each configuration issue in the list.
    return issues;
}

SDC calls destroy() during validation, and when a pipeline is stopped:

/**
 * {@inheritDoc}
 */
@Override
public void destroy() {
    // Clean up any open resources.
    super.destroy();
}

Put custom logic to read data from the source system

The produce() method is where we write the actual logic to read data from the source system. Replace the template code with the following logic to read messages from SQS:

@Override
public String produce(String lastSourceOffset, int maxBatchSize, BatchMaker batchMaker) throws StageException {
    // Offsets can vary depending on the data source. Here we use an integer as an example only.
    long nextSourceOffset = 0;
    if (lastSourceOffset != null) {
        nextSourceOffset = Long.parseLong(lastSourceOffset);
    }

    int numRecords = 0;

    // Create records and add to batch. Records must have a string id. This can include the source offset
    // or other metadata to help uniquely identify the record itself.
    // Note: Message is com.amazonaws.services.sqs.model.Message; JSONObject is org.json.JSONObject.

    AWSSQSUtil awssqsUtil = new AWSSQSUtil(getAccessKey(), getSecreteKey(), getQueueName(), getEndPoint());

    String queueName = awssqsUtil.getQueueName();
    String queueUrl = awssqsUtil.getQueueUrl(queueName);

    // Maximum number of messages that can be retrieved in one request
    int maxMessageCount = 10;

    List<Message> messages = awssqsUtil.getMessagesFromQueue(queueUrl, maxMessageCount);
    for (Message message : messages) {
        Record record = getContext().createRecord("messageId::" + message.getMessageId());
        Map<String, Field> map = new HashMap<>();
        map.put("receipt_handle", Field.create(message.getReceiptHandle()));
        map.put("md5_of_body", Field.create(message.getMD5OfBody()));
        map.put("body", Field.create(message.getBody()));

        JSONObject attributeJson = new JSONObject();

        for (Map.Entry<String, String> entry : message.getAttributes().entrySet()) {
            attributeJson.put(entry.getKey(), entry.getValue());
        }

        map.put("attribute_list", Field.create(attributeJson.toString()));

        record.set(Field.create(map));
        batchMaker.addRecord(record);
        ++nextSourceOffset;
        ++numRecords;
        if (getDeleteFlag()) {
            awssqsUtil.deleteMessageFromQueue(queueUrl, message);
        }
    }
    return String.valueOf(nextSourceOffset);
}
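The produce() logic above relies on a small AWSSQSUtil helper class that is not part of the generated template or of the StreamSets API. A minimal sketch of such a helper is shown below, assuming the AWS SDK for Java (aws-java-sdk-sqs) and org.json are added as dependencies to the stage’s pom.xml; the class and method names simply match the calls made above, and the details are an illustration rather than a prescribed implementation.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

import java.util.List;

/** Minimal SQS helper used by SampleSource (a sketch, not a StreamSets API). */
public class AWSSQSUtil {

  private final AmazonSQSClient sqsClient;
  private final String queueName;

  public AWSSQSUtil(String accessKey, String secreteKey, String queueName, String endPoint) {
    // Build an SQS client from the stage configuration values.
    this.sqsClient = new AmazonSQSClient(new BasicAWSCredentials(accessKey, secreteKey));
    this.sqsClient.setEndpoint(endPoint);   // e.g. "https://sqs.us-east-1.amazonaws.com"
    this.queueName = queueName;
  }

  public String getQueueName() {
    return queueName;
  }

  /** Resolves the queue URL for the given queue name. */
  public String getQueueUrl(String queueName) {
    return sqsClient.getQueueUrl(queueName).getQueueUrl();
  }

  /** Fetches up to maxMessageCount messages, including their attributes. */
  public List<Message> getMessagesFromQueue(String queueUrl, int maxMessageCount) {
    ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
        .withMaxNumberOfMessages(maxMessageCount)
        .withAttributeNames("All");
    return sqsClient.receiveMessage(request).getMessages();
  }

  /** Deletes a message that has been read, so it is not redelivered. */
  public void deleteMessageFromQueue(String queueUrl, Message message) {
    sqsClient.deleteMessage(queueUrl, message.getReceiptHandle());
  }
}

Note that SQS returns at most 10 messages per receive request, which is why maxMessageCount is capped at 10 in produce().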

Errors.java

Create custom error messages

To create stage-specific error messages, implement the ErrorCode interface:

@GenerateResourceBundle
public enum Errors implements ErrorCode {

  SAMPLE_00("A configuration is invalid because: {}"),
  SAMPLE_01("Specific reason writing record failed: {}"),
  ;
  private final String msg;

  Errors(String msg) {
    this.msg = msg;
  }

  /** {@inheritDoc} */
  @Override
  public String getCode() {
    return name();
  }

  /** {@inheritDoc} */
  @Override
  public String getMessage() {
    return msg;
  }
}

Create the pipeline with custom origin

Follow the build, extract and restart phases as done earlier, then create the pipeline using the SQS origin and provide the configuration values. The pipeline will read click logs from SQS, extract the clicks made from a particular browser, and write them to the local file system.

[Screenshots: pipeline with the SQS origin and its configuration]

 

Run the pipeline and you will see the messages streaming from the SQS queue.

[Screenshot: pipeline running, with messages streaming from the SQS queue]

 

Congratulations!!! You have successfully created your first customized origin stage.