Error after generating ant 1 #3

Closed
isaacgerg opened this issue May 22, 2019 · 6 comments

@isaacgerg

I get the following error every time after generating ant 1:

```
builtins.ValueError: Tensor("Adam/iterations:0", shape=(), dtype=resource) must be from the same graph as Tensor("training/Adam/Const:0", shape=(), dtype=int64).
```

Prior output is:

```
=======
Ant: 0x5f7a92e8
Loss: 0.531042
Accuracy: 0.750683
Path: InputNode(shape:(256, 256, 1)) -> Conv2DNode(kernel_size:1, filter_count:64, activation:ReLU) -> FlattenNode() -> OutputNode(output_size:1, activation:Sigmoid)
Hash: 2510cfe0ba8648855dc73f4c2cb8e7ff75878eeee906c211c51f470bc6ff0547

---------------------------Current search depth is 1----------------------------
--------------------------------GENERATING ANT 1--------------------------------
Train on 5270 samples, validate on 586 samples
```
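
For context, this failure pattern can be reproduced outside DeepSwarm. A minimal sketch, assuming TF 1.13 in graph mode (the toy model below is illustrative, not the one DeepSwarm builds): a single Keras optimizer instance creates its `iterations` variable in the current default graph, so reusing it after the graph is reset mixes tensors from two graphs.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

optimizer = tf.keras.optimizers.Adam(1e-4)  # one optimizer shared across models

def build_and_fit():
    # Toy stand-in for the model an ant produces
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,))])
    model.compile(optimizer=optimizer, loss='binary_crossentropy')
    model.fit(np.zeros((2, 4)), np.zeros((2, 1)), verbose=0)

build_and_fit()    # first model: trains fine
K.clear_session()  # resets the default graph; the optimizer's variables do not move
build_and_fit()    # second model: raises the "must be from the same graph" ValueError
```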

@isaacgerg
Author

I was able to fix this issue (so far) by having free_gpu() simply return without calling K.clear_session().
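
A minimal sketch of that workaround as a monkey patch, assuming free_gpu() is a method on TFKerasBackend in deepswarm.backends (the method name comes from this thread; the exact attribute path is an assumption):

```python
from deepswarm.backends import TFKerasBackend

def _noop_free_gpu(self):
    # Skipping K.clear_session() keeps the shared optimizer's variables in the
    # same graph as the tensors created for the next ant, avoiding the ValueError.
    return

TFKerasBackend.free_gpu = _noop_free_gpu  # hypothetical patch point
```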

@Pattio
Owner

Pattio commented May 22, 2019

Hmm, that's interesting. I just ran the tests with the newest version in Google Colab and it works without any errors. Could you please provide the following details:

  1. Operating system
  2. TensorFlow version
  3. Training sample code

@isaacgerg
Author

  1. Windows 7 x64
  2. 1.13.1
  3. Training sample code:

```python
import tensorflow as tf
from deepswarm.backends import Dataset, TFKerasBackend
from deepswarm.deepswarm import DeepSwarm

# x_train and y_train are defined elsewhere
dataset = Dataset(training_examples=x_train, training_labels=y_train, testing_examples=x_train, testing_labels=y_train)
backend = TFKerasBackend(dataset=dataset, optimizer=tf.keras.optimizers.Adam(1e-4))
deepswarm = DeepSwarm(backend=backend)
topology = deepswarm.find_topology()
trained_topology = deepswarm.train_topology(topology, 50)
```

The YAML config:

```yaml
DeepSwarm:
    save_folder:
    metrics: accuracy
    max_depth: 15
    reuse_patience: 1

    aco:
        pheromone:
            start: 0.1
            decay: 0.1
            evaporation: 0.1
            verbose: False
        greediness: 0.5
        ant_count: 16

    backend:
        epochs: 15
        batch_size: 16
        patience: 5
        loss: binary_crossentropy
        verbose: True

    spatial_nodes: [InputNode, Conv2DNode, DropoutSpatialNode, BatchNormalizationNode, Pool2DNode]
    flat_nodes: [FlattenNode, DenseNode, DropoutFlatNode, BatchNormalizationFlatNode]

Nodes:

    InputNode:
        type: Input
        attributes: 
            shape: [!!python/tuple [256, 256, 1]]
        transitions:
            Conv2DNode: 1.0

    Conv2DNode:
        type: Conv2D
        attributes:
            filter_count: [32, 64, 128]
            kernel_size: [1, 3, 5]
            activation: [ReLU]
        transitions:
            Conv2DNode: 0.8
            Pool2DNode: 1.2
            FlattenNode: 1.0
            DropoutSpatialNode: 1.1
            BatchNormalizationNode: 1.2
    
    DropoutSpatialNode:
        type: Dropout
        attributes:
            rate: [0.1, 0.3]
        transitions:
            Conv2DNode: 1.1
            Pool2DNode: 1.0
            FlattenNode: 1.0
            BatchNormalizationNode: 1.1

    BatchNormalizationNode:
        type: BatchNormalization
        attributes: {}
        transitions:
            Conv2DNode: 1.1
            Pool2DNode: 1.1
            DropoutSpatialNode: 1.0
            FlattenNode: 1.0

    Pool2DNode:
        type: Pool2D
        attributes:
            pool_type: [max, average]
            pool_size: [2]
            stride: [2, 3]
        transitions:
            Conv2DNode: 1.1
            FlattenNode: 1.0
            BatchNormalizationNode: 1.1

    FlattenNode:
        type: Flatten
        attributes: {}
        transitions:
            DenseNode: 1.0
            OutputNode: 0.8
            BatchNormalizationFlatNode: 0.9

    DenseNode:
        type: Dense
        attributes:
            output_size: [64, 128]
            activation: [ReLU, Sigmoid]
        transitions:
            DenseNode: 0.8
            DropoutFlatNode: 1.2
            BatchNormalizationFlatNode: 1.2
            OutputNode: 1.0

    DropoutFlatNode:
        type: Dropout
        attributes:
            rate: [0.1, 0.3]
        transitions:
            DenseNode: 1.0
            BatchNormalizationFlatNode: 1.0
            OutputNode: 0.9

    BatchNormalizationFlatNode:
        type: BatchNormalization
        attributes: {}
        transitions:
            DenseNode: 1.1
            DropoutFlatNode: 1.1
            OutputNode: 0.9

    OutputNode:
        type: Output
        attributes:
            output_size: [1]
            activation: [Sigmoid]
        transitions: {}
```

@Pattio
Owner

Pattio commented May 22, 2019

Thank you, I located the issue and will update you once it has been fixed.

@isaacgerg
Author

Sure. Also, if you update all your open(WindowsPath) calls to open(str(WindowsPath)), you should be Python 3.5 compatible. open() only accepts pathlib objects directly from Python 3.6 onward.
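
A short illustration of that compatibility fix (the file path below is hypothetical):

```python
import tempfile
from pathlib import Path

path = Path(tempfile.gettempdir()) / "deepswarm_demo.yaml"  # hypothetical path

# On Python 3.6+, open(path) works directly; on 3.5, open() requires a plain
# string, so wrapping the Path in str() keeps the code compatible with both.
with open(str(path), "w") as f:
    f.write("DeepSwarm:\n    max_depth: 15\n")
```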

@Pattio
Owner

Pattio commented May 22, 2019

The issue is now fixed in version 0.0.7. Regarding the Python version, thank you for the suggestion, but when developing the library the decision was made to target Python 3.6 without any backward compatibility, as this allows easier development. At the moment I don't really see any problem with using 3.6 as the default version.

Pattio closed this as completed May 22, 2019